201

Continuation-Passing C: Program transformations for compiling concurrency in an imperative language

Kerneis, Gabriel 09 November 2012 (has links) (PDF)
Most computer programs are concurrent: they must perform several tasks at the same time. Threads and events are two common techniques for implementing concurrency. Events are generally lighter and more efficient than threads, but also harder to use. Moreover, they are often too limited; it then becomes necessary to write hybrid code, more complex still, that uses both preemptively scheduled threads and cooperatively scheduled events. In this thesis we show that concurrent programs written in a threaded style can be translated automatically into equivalent, efficient event-driven programs by a sequence of proven source-to-source transformations. We first propose Continuation-Passing C, an extension of the C language for writing concurrent systems that offers very lightweight, unified (cooperative and preemptive) threads. CPC programs are processed by the CPC translator to produce efficient sequentialized event-driven code, using native threads for the preemptive parts. We then define and prove the correctness of these transformations, in particular lambda lifting and CPS conversion, for an imperative language. Finally, we validate the design and implementation of CPC by comparing it to other thread libraries and by presenting our BitTorrent seeder Hekate. We also justify our choice of lambda lifting by implementing eCPC, a variant of CPC using environments, and by comparing its performance to that of CPC.
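The key transformation named above can be illustrated in plain C. The sketch below is not CPC's actual output; it hand-writes the CPS conversion of a trivial counting function, with `cont` as a hypothetical continuation type, to show how reifying "the rest of the computation" lets an event loop suspend and resume what the programmer wrote as a thread:

```c
#include <stdio.h>

/* Direct style: the control state lives on the call stack, so
 * suspending this function means capturing the stack. */
static int count_down(int n) {
    if (n == 0) return 0;
    return count_down(n - 1); /* a cooperative thread could yield here */
}

/* CPS style: the rest of the computation is reified as a value,
 * so an event loop can store it and resume it later. */
typedef void (*cont)(int result, void *env);

static void count_down_cps(int n, cont k, void *env) {
    if (n == 0) { k(0, env); return; }
    /* instead of blocking, a CPC-style runtime would re-register
     * this call with the event loop; here we simply tail-call */
    count_down_cps(n - 1, k, env);
}

static void done(int result, void *env) {
    (void)env;
    printf("finished with %d\n", result);
}

int main(void) {
    (void)count_down(3);           /* direct version */
    count_down_cps(3, done, NULL); /* CPS version */
    return 0;
}
```

In CPC itself, the local variables that survive a suspension point are lambda-lifted into extra parameters rather than stored in a heap-allocated environment, which is precisely the design choice the thesis defends against the eCPC variant.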
202

Completeness of Fact Extractors and a New Approach to Extraction with Emphasis on the Refers-to Relation

Lin, Yuan 07 August 2008 (has links)
This thesis deals with fact extraction, which analyzes source code (and sometimes related artifacts) to produce extracted facts about the code. These facts may, for example, record where in the code variables are declared and where they are used, as well as related information. Such extracted facts are typically used in software reverse engineering to reconstruct the design of the program. This thesis has two main parts, each of which deals with a formal approach to fact extraction. Part 1 of the thesis deals with the question: How can we demonstrate that a fact extractor actually does its job? That is, does the extractor produce the facts that it is supposed to produce? This thesis builds on the concept of semantic completeness of a fact extractor, as defined by Tom Dean et al., and further defines source, syntax and compiler completeness. One of the contributions of this thesis is to show that in certain important cases (when the extractor is deterministic and its front end is idempotent), there is an efficient algorithm to determine whether the extractor is compiler complete. This result is surprising, considering that in general it is undecidable whether two programs are semantically equivalent, and source code and its corresponding extracted facts are each essentially programs that must be proved equivalent, or at least sufficiently similar. The larger part of the thesis, Part 2, presents Algebraic Refers-to Analysis (ARA), a new approach to fact extraction with emphasis on the Refers-to relation. ARA provides a framework for specifying fact extraction, based on a three-step pipeline: (1) basic (lexical and syntactic) extraction, (2) a normalization step and (3) a binding step. For practical programming languages, these three steps are repeated, in stages and phases, until the Refers-to relation is computed. During the writing of this thesis, ARA pipelines for C, Java, C++, Fortran, Pascal and Ada were designed, and a prototype fact extractor for the C language was created. Validating ARA means demonstrating that ARA pipelines satisfy programming language standards such as the ISO C++ standard; in other words, we show that ARA phases (stages and formulas) are correctly transcribed from the rules in the language standard. Compared with existing approaches such as attribute grammars, ARA has the following advantages. First, ARA formulas are concise, elegant and, more importantly, insightful; as a result, we made some interesting discoveries about the programming languages studied. Second, ARA is validated on the basis of set theory and relational algebra, which is more reliable than exhaustive testing. Finally, ARA formulas are supported by existing software tools such as database management systems and relational calculators. Overall, the contributions of this thesis include 1) the invention of the concept of a hierarchy of completeness and the automatic testing of completeness, 2) the use of the relational data model in fact extraction, 3) the invention of Algebraic Refers-to Analysis (ARA) and 4) the discovery of some interesting facts about programming languages.
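To convey the flavor of the relational style the abstract describes (the relation names below are illustrative, not ARA's actual formulas), a Refers-to relation for a simple block-structured language can be expressed as a join over base relations produced by the extraction step:

\[
\textit{RefersTo} \;=\; \{\, (u,d) \;\mid\; \exists s.\ (u,s) \in \textit{EnclosingScope},\ (s,d) \in \textit{Declares},\ \textit{name}(u) = \textit{name}(d) \,\}
\]

where \(\textit{EnclosingScope}\) and \(\textit{Declares}\) would come from basic lexical and syntactic extraction, and a binding step would then select the innermost such scope \(s\).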
203

Certified compilation of SCADE/LUSTRE

Auger, Cédric 07 February 2013 (has links) (PDF)
Synchronous languages appeared around the 1980s in response to the need for a simple mathematical model for implementing safety-critical real-time systems. In this model, time is divided into discrete instants during which every component of the system receives and produces a value. This modeling allows much simpler reasoning by avoiding the need to account for the computation time of each operation. In the world of critical software, the reliability of the system and of its behavior is paramount, and being slower is accepted in exchange for being safer. To increase this reliability, rather than designing the entire system by hand, machines are used to automatically synthesize the desired system from a description that is as concise as possible. In the case of software, this mechanism is called compilation, and it avoids errors inadvertently introduced by humans. It does not, however, guarantee the correct correspondence between the produced system and the given description. Recent work by an INRIA team led by Xavier Leroy resulted in 2008 in the CompCert compiler, from a large subset of C to PowerPC assembly, for which it was proved in the Coq proof assistant that the generated assembly code indeed corresponds to the C description of the source program. Such a compiler offers strong guarantees of correct correspondence between the synthesized system and the given description. Moreover, in the compilers used for critical real-time systems, most optimizations are disabled to avoid the errors associated with them; in CompCert, the optimizations provided are themselves proved, which could allow these passes in the production of critical real-time systems without compromising their reliability. The goal of this thesis is to take a similar approach, but specific to a synchronous language, which is better suited than C to describing critical real-time systems. A dataflow synchronous language similar to Lustre, named Ls, and an imperative language similar to C, named Obc, are proposed, together with their formal semantics and a compilation chain with proofs of semantic preservation along that chain.
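The Ls-to-Obc compilation described above follows the classic scheme for synchronous dataflow: each node becomes a state structure plus a step function executed once per discrete instant. Below is a hand-written C rendition of that standard scheme for a simple counter node (an illustration of the technique, not code generated by the thesis's compiler):

```c
#include <stdio.h>

/* Source (Lustre-like):  node counter() returns (n : int)
 *                        let  n = 0 -> pre n + 1;  tel
 * Compiled, Obc-style: persistent state plus a step function
 * called once per synchronous instant. */
typedef struct {
    int init;  /* are we at the first instant? */
    int pre_n; /* value of n at the previous instant */
} counter_state;

static void counter_reset(counter_state *s) {
    s->init = 1;
}

static int counter_step(counter_state *s) {
    int n = s->init ? 0 : s->pre_n + 1;
    s->init = 0;
    s->pre_n = n;
    return n;
}

int main(void) {
    counter_state s;
    counter_reset(&s);
    for (int i = 0; i < 5; i++)
        printf("%d\n", counter_step(&s)); /* prints 0 1 2 3 4 */
    return 0;
}
```

A semantic-preservation proof for such a chain relates the stream semantics of the dataflow node to the sequence of values returned by successive calls to the step function.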
205

Automatic generation of datapaths for microprocessor-type VLSI circuits

Jamier, Robert; Courtois, Bernard January 2008 (has links)
Reproduction of: thèse de docteur-ingénieur: computer science: Grenoble, INPG: 1986. / Title taken from the title screen. Bibliography pp. 233-241.
206

Dynamics, Processes and Characterization in Classical and Quantum Optics

Gamel, Omar 09 January 2014 (has links)
We pursue topics in optics that follow three major themes: time-averaged dynamics with the associated Effective Hamiltonian theory, quantification and transformation of polarization, and periodicity within quantum circuits. Within the first theme, we develop a technique for finding the dynamical evolution in time of a time-averaged density matrix. The result is an equation of evolution that includes an Effective Hamiltonian, as well as decoherence terms that sometimes manifest in a Lindblad-like form. We also apply the theory to the examples of the AC Stark shift and three-level Raman transitions. In the theme of polarization, the most general physical transformation on the polarization state has been represented as an ensemble of Jones matrix transformations, equivalent to a completely positive map on the polarization matrix; this has been directly assumed without proof by most authors. We follow a novel approach to derive this expression from simple physical principles, basic coherence optics and the matrix theory of positive maps. Addressing polarization measurement, we first establish the equivalence of classical polarization and quantum purity, which leads to the identical structure of the Poincaré and Bloch spheres. We analyze and compare various measures of polarization/purity for general dimensionality proposed in the literature, with a focus on the three-dimensional case. In pursuit of the final theme of periodic quantum circuits, we introduce a procedure that synthesizes the circuit for the simplest periodic function that is one-to-one within a single period, of a given period p. Applying this procedure, we synthesize these circuits for p up to five bits. We conjecture that such a circuit will need at most n Toffoli gates, where p is an n-bit number. Moreover, we apply our circuit synthesis to compiled versions of Shor's algorithm, showing that it can create more efficient circuits than ones previously proposed. We provide some new compiled circuits for experimentalists to use in the near future. A layer of "classical compilation" is pointed out as a method to further simplify circuits. Periodic and compiled circuits should be helpful for creating experimental milestones and for the purposes of validation.
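For context on the "Lindblad-like form" the abstract mentions, the standard Lindblad master equation (general knowledge, stated here for reference rather than taken from the thesis) reads

\[
\dot{\rho} \;=\; -\frac{i}{\hbar}\bigl[H_{\mathrm{eff}},\,\rho\bigr] \;+\; \sum_{k} \gamma_k \Bigl( L_k\,\rho\,L_k^{\dagger} \;-\; \tfrac{1}{2}\bigl\{ L_k^{\dagger} L_k,\ \rho \bigr\} \Bigr),
\]

where the \(L_k\) are jump operators and the \(\gamma_k\) non-negative rates; the thesis's time-averaging procedure produces decoherence terms that only sometimes take this form.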
208

Certified Compilation and Worst-Case Execution Time Estimation

Oliveira Maroneze, André 17 June 2014 (has links) (PDF)
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation itself is decomposed into three steps: a program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable; the parallel development of specifications and proofs is essential to this end. Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
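The IPET step referred to above is conventionally phrased as an integer linear program over execution counts (the generic formulation standard in the WCET literature, not this thesis's Coq development):

\[
\mathrm{WCET} \;=\; \max \sum_{i} c_i\, x_i
\qquad \text{subject to} \qquad
x_{\mathrm{entry}} = 1, \qquad
\sum_{e \in \mathrm{in}(i)} x_e \;=\; x_i \;=\; \sum_{e \in \mathrm{out}(i)} x_e,
\]

where \(c_i\) is the local worst-case cost of basic block \(i\), \(x_i \ge 0\) its execution count, and \(x_e\) the count of edge \(e\); additional constraints of the form \(x_{\mathrm{back}} \le b \cdot x_{\mathrm{header}}\) inject the bounds computed by the loop bound analysis.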
209

Optimizing compilation for extensible processors

Floc'h, Antoine 08 June 2012 (has links) (PDF)
Application-specific instruction-set processors (ASIPs) are a compromise between the performance of dedicated hardware and the flexibility of a programmable processor. These specialized processors can consist of a general-purpose processor whose instruction set is extended with instructions specific to one or more applications, executed on a hardware extension; such processors are called extensible processors. While the design and verification cost of such architectures is considerably reduced compared to a full custom design, part of the complexity is shifted to the compilation step. Indeed, the instruction set of an extensible processor is both an input and an output of the compilation process. This thesis proposes several contributions to guide the design process of such architectures through optimization techniques adapted to extensible processors. The first contribution selects and schedules specialized VLIW instructions by solving a single constraint programming (CP) optimization problem. We also propose an original technique that addresses the interaction between code optimization and instruction-set extension: the original code of a program's loop nests is automatically transformed (using the polyhedral model) so as to select specialized instructions that are vectorizable and whose temporary data, produced within a loop iteration, are kept on the processor's hardware extension.
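Instruction-set extension as described above amounts to outlining a recurring dataflow pattern behind a single specialized instruction. The C sketch below models that idea in software, with `custom_mac` standing in for a hypothetical custom instruction (the intrinsic name and the selected pattern are illustrative, not taken from the thesis):

```c
#include <stdio.h>

/* Hypothetical custom instruction: on an extensible processor the
 * compiler would select this multiply-accumulate pattern and map it
 * onto the hardware extension; here we model its semantics. */
static inline int custom_mac(int acc, int a, int b, int c) {
    return acc + a * b + c;
}

int main(void) {
    int a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4] = {1, 1, 1, 1};
    int acc = 0;
    /* Before extension the body reads acc += a[i]*b[i] + c[i];
     * after selection it becomes one specialized instruction. */
    for (int i = 0; i < 4; i++)
        acc = custom_mac(acc, a[i], b[i], c[i]);
    printf("%d\n", acc); /* 74 */
    return 0;
}
```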
210

Decoupled approaches to register and software controlled memory allocations

Diouf, Boubacar 15 December 2011 (has links) (PDF)
Despite the benefits of the memory hierarchy, efficient use of the registers and local memories (also called scratchpad memories) present in most embedded processors, graphics processors (GPUs) and network processors remains essential to reduce accesses to higher levels of memory. During compilation from a source language to executable code, two optimizations are of utmost importance: register allocation and local memory allocation. In this thesis we are interested in decoupled approaches, which solve the allocation and assignment problems separately and help to improve the quality of register and local memory allocations. In the first part of this thesis we address two aspects of the register allocation problem: improving just-in-time (JIT) register allocation and the spill minimization problem. We introduce split register allocation, which leverages the decoupled approach to improve register allocation in the context of JIT compilation. We experimentally validate the effectiveness of split register allocation and its portability with respect to register count variations, relying on annotations whose impact on the bytecode size is negligible. We then introduce a new decoupled approach, called iterated-optimal allocation, which focuses on the spill minimization problem. The iterated-optimal allocation algorithm achieves results close to optimal while offering pseudo-polynomial guarantees for SSA programs and fast allocations on general programs. In the second part of this thesis, we study how a decoupled local memory allocation can be proposed in light of recent progress in register allocation. We first validate our intuition for a decoupled approach to local memory allocation. Then, we study local memory allocation more theoretically, establishing the junction between local memory allocation for linearized programs and weighted interval graph coloring. We design and analyze a new variant of the ship-building problem, called the submarine-building problem, and show that it is NP-complete on interval graphs while solvable in linear time on proper interval graphs (equivalent to unit interval graphs); it is the first problem known to separate the two classes in this way. In the third part of this thesis, we propose a heuristic-based solution, the clustering allocator, which decouples the local memory allocation problem and aims to minimize the allocation cost. The clustering allocator, while devised for local memory allocation, also appears to be a very good solution to the register allocation problem. After many years of separation, this new algorithm may serve as a bridge to reconcile the local memory allocation and register allocation problems.
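The junction with interval graph coloring drawn above can be made concrete: on linearized code, live ranges are intervals, and a greedy sweep in order of start points colors an interval graph optimally, using exactly as many colors as the maximum number of simultaneously live ranges. The sketch below is a textbook illustration of that fact, not the thesis's clustering allocator:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* A live range [start, end) of a variable in linearized code. */
typedef struct { int start, end, color; } interval;

static int by_start(const void *a, const void *b) {
    return ((const interval *)a)->start - ((const interval *)b)->start;
}

/* Greedy sweep in start order: optimal on interval graphs, using
 * exactly max-overlap (clique number) colors. Assumes < 64 colors. */
static int color_intervals(interval *v, int n) {
    qsort(v, n, sizeof *v, by_start);
    int ncolors = 0;
    for (int i = 0; i < n; i++) {
        bool used[64] = { false };
        for (int j = 0; j < i; j++)     /* mark colors of ranges   */
            if (v[j].end > v[i].start)  /* still live at our start */
                used[v[j].color] = true;
        int c = 0;
        while (used[c]) c++;            /* take the lowest free color */
        v[i].color = c;
        if (c + 1 > ncolors) ncolors = c + 1;
    }
    return ncolors;
}

int main(void) {
    interval v[] = { {0,4,0}, {1,3,0}, {2,6,0}, {5,8,0} };
    printf("colors needed: %d\n", color_intervals(v, 4)); /* 3 */
    return 0;
}
```

Here three colors suffice because at most three ranges overlap at once; this is why decoupling works on such structures: once the allocation phase has lowered the maximum pressure to the number of registers, the assignment phase is guaranteed to succeed.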
