11

Improving tiling, reducing compilation time, and extending the scope of polyhedral compilation / Amélioration du tuilage, réduction du temps de compilation, et extension de l'utilisabilité de la compilation polyédrique

Baghdadi, Mohamed Riyadh 25 September 2015 (has links)
Multi-core processors are now in widespread use in almost all areas of computing: desktops, laptops and accelerators such as GPGPUs (General Purpose Graphics Processing Units). To harness the power of multi-core processors and complex memory hierarchies, powerful compiler optimizations, and especially loop nest transformations, are in high demand. The polyhedral optimization framework is showing promising results in addressing this problem. It is an algebraic program representation together with a set of analyses, transformations and code generation algorithms that enable a compiler to reason about advanced loop nest transformations, addressing most of the parallelism and locality-enhancing challenges.

In this thesis we address some of the limitations of the polyhedral framework. We focus on three problems and propose practical solutions to each.

The first problem concerns the ability to apply tiling to code that has false dependences (loop nest tiling is an optimization that changes the order of execution of statements in a loop nest in order to enhance data locality; false dependences are induced by the reuse of a single memory location to store multiple values during the life of the program). To preserve the validity of loop nest transformations and parallelization, data dependences need to be analyzed. Memory dependences come in two varieties: true dependences (a.k.a. flow dependences) and false dependences (a.k.a. output and anti dependences). While true dependences must be satisfied in order to preserve the correct order of computations, false dependences merely reduce the degrees of freedom for loop transformations. In particular, loop tiling is severely limited in the presence of these dependences. While array expansion, a transformation that turns scalars into arrays and arrays into higher-dimensional arrays, removes all false dependences, the overhead of this transformation on memory and its detrimental impact on register-level reuse can be catastrophic. We propose and evaluate a compilation technique to safely ignore a large number of false dependences in order to enable loop nest tiling in the polyhedral model. It is based on the precise characterization of interferences between live range intervals, and it does not incur any scalar or array expansion.

The second problem is the long compilation time that one may experience when using polyhedral tools to optimize a program, in particular the long execution time of the Pluto affine scheduling algorithm, the algorithm responsible for changing the schedule (order of execution) of statements in order to maximize parallelism and data locality. Reducing the execution time of this affine scheduling algorithm reduces the overall compilation time. We introduce and evaluate a technique called offline statement clustering, a practical technique designed to reduce the execution time of the Pluto affine scheduling algorithm without much loss in optimization opportunities. Using this technique, the statements of the program are clustered into macro-statements, and the Pluto affine scheduling algorithm is then used to schedule the macro-statements instead of the original statements of the program. Since the number of macro-statements is smaller than the number of statements in the original program, scheduling the macro-statements is in general faster than scheduling the original statements. We present the statement clustering algorithm, show how offline statement clustering integrates transparently with the workflow of a state-of-the-art polyhedral compiler, and present two heuristics for choosing how statements should be clustered together. We show experimentally that statement clustering can reduce the scheduling time by a factor of 8x (in median) without a significant loss in optimization opportunities.

The third problem relates to two limitations. The first limitation concerns the applicability of polyhedral compilation to programs that do not obey the classical restrictions of the polyhedral model (a program can be represented precisely in the polyhedral model only if it contains no non-affine conditionals, non-affine loop bounds or non-affine accesses). The second limitation concerns the ability of tools to generate code whose performance approaches that of hand-written code. To address these two limitations, we propose a programming language called PENCIL, a subset of GNU C99 with specific programming rules and a few extensions. Using this subset and these extensions allows compilers to better exploit parallelism and to better optimize the code.
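To make the false-dependence discussion above concrete, here is a minimal C sketch of our own (not code from the thesis): the scalar t is a single memory location reused by every iteration, which creates output and anti dependences between iterations even though each value of t is live only within one iteration.

```c
#include <stdio.h>

#define N 4

int main(void) {
    double A[N][N], B[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            A[i][j] = i + j;

    double t; /* one memory location reused by every iteration */
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            t = A[i][j] * 2.0;  /* each write conflicts with every other
                                   write and read of t: output and anti
                                   (false) dependences across iterations */
            B[i][j] = t + 1.0;  /* last use: t's live range ends here */
        }
    }
    /* Since each live range of t starts and ends inside a single
       iteration, the live ranges never interfere, and a reordering such
       as tiling that keeps each definition of t next to its own use is
       still correct. A classical dependence test nevertheless reports
       loop-carried conflicts on t and blocks tiling, unless t is
       expanded into a full array t[i][j] at a significant memory cost. */
    printf("B[1][2] = %f\n", B[1][2]);
    return 0;
}
```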
12

Etude d'un langage intermédiaire pour la compilation d'Algol 60 - Application à un calculateur de type microprogrammé : CAE 510 / A study of an intermediate language for compiling Algol 60 - Application to a microprogrammed computer: the CAE 510

Le Palmec, Jean 01 February 1966 (has links) (PDF)
.
13

Iterative compilation and performance prediction for numerical applications

Fursin, Grigori G. January 2004 (has links)
As the current rate of improvement in processor performance far exceeds the rate of improvement in memory performance, memory latency is the dominant overhead in many performance-critical applications. In many cases, automatic compiler-based approaches to improving memory performance are limited, and programmers frequently resort to manual optimisation techniques. However, this process is tedious and time-consuming. Furthermore, a diverse range of rapidly evolving hardware makes the optimisation process even more complex: it is often hard to predict the potential benefit of a given optimisation, and there are no simple criteria for deciding when to stop optimising, i.e. when optimal memory performance has been achieved or sufficiently approached. This thesis presents a platform-independent optimisation approach for numerical applications based on iterative feedback-directed program restructuring, using a new, reasonably fast and accurate performance prediction technique to guide optimisations. New strategies for searching the optimisation space by means of profiling, to find the best possible program variant, have been developed. These strategies have been evaluated using a range of kernels and programs on different platforms and operating systems. A significant performance improvement over state-of-the-art native static and platform-specific feedback-directed compilers has been achieved using the new approaches.
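As an illustration of the iterative, profile-guided search described above, here is a small self-contained C sketch of ours (the transposed-copy kernel and the tile-size candidates are arbitrary stand-ins, not Fursin's actual system): each candidate tile size plays the role of one program variant, and the driver times every variant and keeps the fastest.

```c
#include <stdio.h>
#include <time.h>

#define N 512
#define REPS 20

static double A[N][N], B[N][N]; /* static, so zero-initialized */

/* One candidate program variant: a transposed copy with a given tile
   size. Tiling changes the memory access pattern, so the tile size has
   a measurable effect on cache behaviour and running time. */
static void variant(int tile) {
    for (int ii = 0; ii < N; ii += tile)
        for (int jj = 0; jj < N; jj += tile)
            for (int i = ii; i < ii + tile && i < N; i++)
                for (int j = jj; j < jj + tile && j < N; j++)
                    A[j][i] = B[i][j];
}

int main(void) {
    int candidates[] = {8, 16, 32, 64, 128};
    int best = candidates[0];
    double best_time = 1e30;

    /* Feedback-directed search: run each variant, profile it, keep the
       fastest. A real system would prune this space with a performance
       prediction model rather than exhaustively profiling everything. */
    for (int k = 0; k < 5; k++) {
        clock_t t0 = clock();
        for (int r = 0; r < REPS; r++)
            variant(candidates[k]);
        double dt = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("tile=%3d  time=%.4fs\n", candidates[k], dt);
        if (dt < best_time) { best_time = dt; best = candidates[k]; }
    }
    printf("best tile size: %d\n", best);
    return 0;
}
```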
14

Compiling possibilistic graphical models : from inference to decision / Compilation des modèles graphiques possibilistes : de l'inférence à la décision

Ayachi, Raouia 18 January 2013 (has links)
This thesis addresses two important issues in reasoning and decision making under uncertainty. First, we have developed compilation-based inference methods dedicated to possibilistic networks. We have adapted the standard approach, initially proposed for Bayesian networks, to the possibilistic framework, and we have refined it using local structure. We have also proposed a new encoding strategy, called possibilistic local structure, exclusively useful in a qualitative framework. Moreover, we have implemented a purely possibilistic approach based on transforming possibilistic networks into possibilistic knowledge bases. Our second contribution consists in extending our inference approaches to possibilistic causal networks in order to efficiently compute the impact of both observations and interventions. We have compared, in particular, mutilation-based approaches with augmentation-based ones. Finally, we have explored the decision-making aspect under compilation by extending our results on compiling possibilistic networks to efficiently evaluate possibilistic influence diagrams. An experimental study evaluating the different approaches studied in this thesis is also presented.
15

Building Web Based Programming Environments for Functional Programming

Yoo, Daniel 26 April 2012 (has links)
Functional programming offers an accessible and powerful algebraic model for computing. JavaScript is the language of the ubiquitous Web, but it does not support functional programs well due to its single-threaded, asynchronous nature and lack of rich control flow operators. The purpose of this work is to extend JavaScript to a language environment that satisfies the needs of functional programs on the Web. This extended language environment uses sophisticated control operators to provide an event-driven functional programming model that cooperates with the browser's DOM, along with synchronous access to JavaScript's asynchronous APIs. The results of this work are used toward two projects: (1) a programming environment called WeScheme that runs in the web browser and supports a functional programming curriculum, and (2) a tool-chain called Moby that compiles event-driven functional programs to smartphones, with access to phone-specific features.
16

Langages pour l'écriture de compilateurs / Languages for writing compilers

Cohen, Jacques 01 June 1967 (has links) (PDF)
.
17

The Description of Large Systems

Pitman, Kent 01 September 1984 (has links)
In this paper we discuss the problems associated with the description and manipulation of large systems when their sources are not maintained as single files. We show why and how tools that address these issues, such as Unix MAKE and Lisp Machine DEFSYSTEM, have evolved. Existing formalisms suffer from the problem that their syntax is not easily separable from their functionality. In programming languages, standard "calling conventions" exist to insulate the caller of a function from the syntactic details of how that function was defined, but until now no such conventions have existed to insulate consumers of program systems from the details of how those systems were specified. We propose a low-level data abstraction which can support notations such as those used by MAKE and DEFSYSTEM without requiring that the introduction of a new notation be accompanied by a completely different set of tools for instantiating or otherwise manipulating the resulting system. Lisp is used for presentation, but the issues are not idiosyncratic to Lisp.
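As a concrete rendering of the proposed data abstraction, here is a small C sketch of ours (the module names and layout are invented, and the paper itself uses Lisp for presentation): the system description is plain data, a table of modules and dependency edges that any surface notation such as a makefile or a DEFSYSTEM form could be parsed into, so a generic build walk can instantiate the system without knowing which notation described it.

```c
#include <stdio.h>

#define MAXDEPS 4

struct module {
    const char *name;
    int deps[MAXDEPS]; /* indices into mods[]; -1 terminates the list */
    int built;         /* memo flag: each module is built at most once */
};

static struct module mods[] = {
    {"parser",  {2, -1},    0},
    {"codegen", {2, -1},    0},
    {"utils",   {-1},       0},
    {"driver",  {0, 1, -1}, 0},
};

/* The generic "instantiate" operation: build dependencies first. */
static void build(int i) {
    if (mods[i].built) return;
    for (int d = 0; d < MAXDEPS && mods[i].deps[d] >= 0; d++)
        build(mods[i].deps[d]);
    printf("building %s\n", mods[i].name);
    mods[i].built = 1;
}

int main(void) {
    build(3); /* driver, and transitively everything it depends on */
    return 0;
}
```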
18

Compiling Java in linear nondeterministic space

Donnoe, Joshua January 1900 (has links)
Master of Science / Department of Computer Science / Torben Amtoft / Shannon's and Chomsky's attempts to model natural language with Markov chains showed differing gauges of language complexity. These were codified in the Chomsky Hierarchy with four types of languages, each with an accepting type of grammar and automaton. Though still foundationally important, this hierarchy fails to identify remarkable proper subsets of the types, including recursive languages among recursively enumerable languages. In general, by Rice's theorem, it is undecidable whether a Turing machine's language is recursive. But specifically, Hopcroft & Ullman show that the languages of space-bounded Turing machines are recursive. We show the converse also to be true. The space hierarchy theorem shows that there is a continuum of proper subsets within the recursive languages. Building on Myhill's description of a linear bounded automaton, Landweber showed that such automata accept a subset of the type 1 languages that includes the type 2 languages. Kuroda expanded the definition, making the automata nondeterministic, and showed that nondeterministic linear space is exactly the set of type 1 languages. That only one direction was proven deterministically, but both nondeterministically, suggests that nondeterminism increases expressiveness. This is further supported by Savitch's theorem. However, it is not without precedent for predictions in computability theory to be wrong: Turing showed that Hilbert's Entscheidungsproblem is unsolvable, and Immerman disproved Landweber's belief that type 1 languages are not closed under complementation. Currently, a major use of language theory is computer language processing, including compilation. We show that for the Java programming language, compilability can be computed in nondeterministic linear space, by exhibiting a (nondeterministic) linear bounded automaton which abstractly computes compilability. The automaton uses the traditional pipeline architecture to transform the input in phases. The devised compiler attempts to build a parse tree and then check its semantic properties. The first two phases, lexical and syntactic analysis, are classic language theory tasks. Lexical analysis greedily finds matches of a regular language; each match is converted to a token and printed to the next stream, so linearity is preserved. Using a Lisp format, a parse tree can be stored as a character string which is still linear in the size of the input. Since the tree string preserves structural information from the program source, the tree itself serves as a symbol table, which would normally be stored separately in an efficiently readable form. Though more difficult than the previous step, this will also be shown to be linear. Lastly, semantic analysis, including type checking and reachability, is performed by traversing the tree and annotating nodes. This implies that there must exist a context-sensitive grammar that accepts compilable Java. Therefore, even though the execution of Java programs is Turing-complete, their compilation is not.
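To illustrate the claim that a Lisp-format parse tree can be stored as a character string of linear size, here is a small C sketch of ours (a toy right-associative grammar for digit sums, not the thesis's Java front end): the recursive-descent parser streams the tree out as an s-expression whose length stays proportional to the input.

```c
#include <stdio.h>
#include <ctype.h>

/* Toy grammar: expr -> digit ('+' expr)?
   The parser streams the tree out as a Lisp-style s-expression, so the
   stored "tree" is a character string of length linear in the input,
   which is the property the linear-bounded-automaton argument relies on. */
static const char *p; /* cursor into the input */

static void expr(void) {
    if (!isdigit((unsigned char)*p)) { printf("(error)"); return; }
    char d = *p++;
    if (*p == '+') {
        p++;                        /* consume '+' */
        printf("(+ (num %c) ", d);  /* open node, emit left child */
        expr();                     /* emit right child */
        printf(")");                /* close node */
    } else {
        printf("(num %c)", d);
    }
}

int main(void) {
    p = "1+2+3";
    expr();    /* prints (+ (num 1) (+ (num 2) (num 3))) */
    printf("\n");
    return 0;
}
```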
19

Kompilační přístupy pro automatické plánování / Compilation-based Approaches for Automated Planning

Pantůčková, Kristýna January 2020 (has links)
One of the possible approaches to automated planning is compilation to satisfiability or constraint satisfaction. Compilation makes it possible to take advantage of the steady advancement of SAT and CSP solvers. In this thesis, we implement three of the encodings recently proposed for the compilation of planning problems: the model TCPP, the R2∃-Step encoding and the Reinforced Encoding. All these approaches search for parallel plans; however, since they use different definitions of a parallel step and different variables and constraints, we decided to compare their performance on standard benchmarks from international planning competitions. As the R2∃-Step encoding was not suitable for our implementation, we present a modified version of this encoding with a reduced number of variables and constraints. We also demonstrate how different definitions of a parallel step in the Reinforced Encoding affect performance. Furthermore, we suggest redundant constraints extending these encodings. Although they did not prove to be beneficial in general, they could slightly improve performance on some benchmarks, especially in the R2∃-Step encoding.
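To show the flavour of such a compilation, here is a hypothetical micro-instance of ours (a deliberately tiny fragment, not any of the three encodings above): one fluent f and one action a whose precondition is f and whose effect deletes f, unrolled over T steps and emitted as DIMACS CNF that any off-the-shelf SAT solver accepts.

```c
#include <stdio.h>

/* Variable numbering: f at step t -> 2t+1, a at step t -> 2t+2.
   f exists at steps 0..T, a at steps 0..T-1: 2T+1 variables,
   three clauses per step. */
#define T 3

static int f(int t) { return 2 * t + 1; }
static int a(int t) { return 2 * t + 2; }

int main(void) {
    printf("p cnf %d %d\n", 2 * T + 1, 3 * T);
    for (int t = 0; t < T; t++) {
        printf("%d %d 0\n", -a(t), f(t));      /* a_t -> f_t (precondition) */
        printf("%d %d 0\n", -a(t), -f(t + 1)); /* a_t -> not f_{t+1} (effect) */
        printf("%d %d 0\n", -f(t + 1), f(t));  /* frame axiom: f holds later
                                                  only if it already held */
    }
    return 0;
}
```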
20

Intégration d'un système d'exploitation dans le flot de développement logiciel/matériel / Integrating an operating system into the software/hardware development flow

Julien, Marc January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
