171 |
Graph-based features for machine learning driven code optimization / Maskinlärnings-driven programkodsoptimering med graf-baserad datautvinning
Kindestam, Anton, January 2017
In this paper we present a method of using the Shortest-Path Graph Kernel, on graph-based features of computer programs, to train a Support Vector Regression model which predicts execution-time speedup over baseline given an unseen program and a point in optimization space, based on a method proposed in Using Graph-Based Program Characterization for Predictive Modeling by Park et al. The optimization space is represented by command-line parameters to the polyhedral C-to-C compiler PoCC, and PolyBench is used to generate the data set of speedups over baseline. The model is found to produce reasonable results by some metrics, but due to the large error and the pseudo-random behaviour of the output, the method, in its current form, must reluctantly be rejected. / In this report we present a method for training a Support Vector Regression model which, given an unseen program and a point in the optimization space, can predict how much faster than baseline the program will execute if the given optimizations are applied. To represent the program we use a graph structure to which we can apply a graph kernel, the Shortest-Path Graph Kernel, which determines how similar two graphs are to each other. The method is based on a method presented by Park et al. in Using Graph-Based Program Characterization for Predictive Modeling. The optimization space is obtained through different combinations of command-line parameters to the polyhedral C-to-C compiler PoCC. The test data was obtained by precomputing speedup factors for all optimizations and all programs in the benchmark suite PolyBench. We find that the model produces "good" results by some measures, but due to the large error and the random behaviour of the output, the method, in its current form, must unfortunately be rejected.
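Below is a rough sketch of the kind of pipeline the abstract describes, assuming program graphs are available as networkx graphs and measured PolyBench speedups as a float array; the kernel shown is a simple shortest-path-histogram approximation, not the exact Shortest-Path Graph Kernel or feature encoding used in the thesis.

```python
# Sketch: predict speedup over baseline from graph-based program features
# using a (simplified) shortest-path kernel and Support Vector Regression.
import networkx as nx
import numpy as np
from sklearn.svm import SVR

def sp_histogram(g: nx.Graph, max_len: int = 20) -> np.ndarray:
    """Histogram of all-pairs shortest-path lengths, used as a graph feature."""
    hist = np.zeros(max_len)
    for _, lengths in nx.all_pairs_shortest_path_length(g):
        for d in lengths.values():
            if 0 < d < max_len:
                hist[d] += 1
    return hist

def gram_matrix(graphs_a, graphs_b) -> np.ndarray:
    """Kernel matrix: dot products of shortest-path histograms."""
    ha = np.array([sp_histogram(g) for g in graphs_a])
    hb = np.array([sp_histogram(g) for g in graphs_b])
    return ha @ hb.T

# train_graphs: program graphs augmented with the optimization-space point;
# speedups: execution-time speedup over baseline for each (program, point) pair.
def train_speedup_model(train_graphs, speedups):
    model = SVR(kernel="precomputed")
    model.fit(gram_matrix(train_graphs, train_graphs), speedups)
    return model

def predict_speedup(model, train_graphs, unseen_graphs):
    return model.predict(gram_matrix(unseen_graphs, train_graphs))
```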
|
172 |
Adding hygiene to Gambit Scheme
Doucet, Antoine, 07 1900
The Scheme programming language is known for its powerful macro-transformation system. Representing a program's source code as data that can be manipulated by the language allows programmers to modify the underlying abstract syntax tree directly. Macro transformations use a syntax similar to that of regular procedures, but they instead define procedures to be executed during the compilation phase. These procedures return an abstract-syntax-tree representation to be substituted at the location of the transformer's call. Procedures executed during the compilation phase enjoy the same power as those executed during the evaluation phase. With this kind of macro-transformation system, a programmer can create specialized syntax rules at no additional performance cost: these syntactic extensions allow code abstraction without the usual run-time cost of allocating a closure on the heap. This representation of Scheme source code comes directly from the Lisp programming language. Source code is represented as manipulable lists of symbols, or as lists containing other lists: a structure called an S-expression. However, with this simplistic approach, name clashes can appear. Indeed, the binding referred to by a given identifier is determined exclusively by its lexical context. When an identifier is moved within the abstract syntax tree, it may end up in a lexical context that contains a binding for an identifier of the same name. In such cases, the moved identifier may no longer refer to the expected binding, since this second binding may take precedence over the first. The guarantee of referential transparency is then lost. As a consequence, the choice of identifier names now directly influences the behaviour of the program, producing errors that are difficult to understand. Name clashes can be fixed manually in the code by using, for example, unique identifier names. The automatic preservation of referential transparency is called hygiene, a notion that has been studied extensively in the context of languages of the Lisp family. The latest version of the Scheme revised report, used as the specification for the language, extends it with support for hygienic macro transformations. Until now, the Gambit implementation of Scheme did not provide such a system at its core. As a contribution, we have re-implemented Gambit's macro system to support hygienic macro transformations at the lowest level of the implementation. The chosen algorithm is based on the set-of-scopes algorithm implemented in the Racket language and created by Matthew Flatt. The Racket language drew heavily on Scheme but diverges on several important features. One of these differences is the powerful macro-transformation system on which Racket bases most of its primitives. In this context, the algorithm has therefore been thoroughly tested. In this thesis, we give an overview of the Scheme language and its syntax. We state the hygiene problem and describe different strategies used to solve it. We then justify our choice of algorithm and provide a formal definition. Finally, we present an analysis of the validity and performance of the compiler, comparing the original version of Gambit with our version supporting hygiene. / The Scheme programming language is known for
its powerful macro system.
With Scheme source code represented as actual Scheme data,
macro transformations
allow the programmer, using that data, to act directly on the
underlying abstract syntax tree.
Macro transformations use a syntax similar to regular procedures,
but they define procedures
meant to be executed at compile time.
Those procedures return an abstract syntax tree representation
to be substituted at the transformer's call location.
Procedures executed at compile-time use the same
language power as run-time procedures.
With the macro system,
the programmer can create specialized
syntax rules without additional performance costs.
This also allows for code abstractions
without the expected run-time cost of closure creations.
Scheme's representation of source code using values
inherits that virtue from the Lisp programming language.
Source code is represented as lists of symbols, or lists
containing other lists: a structure known as S-expressions.
However, with this simplistic approach,
accidental name clashes can occur.
The binding to which an identifier refers
is determined by the lexical context of that identifier.
When an identifier is moved around in the abstract syntax tree,
it can be caught within the lexical context of another binding definition with the same name.
This can cause unexpected behavior for programmers
as the choice of names can create substantial changes
in the program.
Accidental name clashes can be manually fixed in the code,
using name obfuscation, for instance.
However, the programmer becomes responsible
for the program's safety.
The automatic preservation of referential transparency
is called hygiene and was
thoroughly studied in the context
of lambda calculus and Lisp-like languages.
The latest Scheme revised report, used as a specification for the
language, extends the language with hygienic macro
transformations.
Up to this point, the Gambit Scheme implementation
did not provide a built-in hygienic macro system.
As a contribution, we re-implemented Gambit's
macro system to support hygienic transformations
at its core.
The algorithm we chose is
based on the set of scopes algorithm, implemented in the
Racket language by Matthew Flatt.
The Racket language is heavily based on Scheme but
diverges on some core features.
One key aspect of the Racket language is
its extensive hygienic syntactic macro system, on
which most core features are built:
the algorithm
was robustly tested in that context.
In this thesis, we will give an overview of the Scheme language
and its syntax. We will state the hygiene problem and describe
different strategies used to enforce hygiene automatically.
Our algorithmic
choice is then justified and formalized. Finally, we
present the original Gambit macro system and explain
the changes required. We also provide a validity and performance
analysis, comparing the original Gambit implementation to
our new system.
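A toy illustration of the set-of-scopes idea mentioned above, under the simplifying assumption that a reference resolves to the binding of the same name whose scope set is the largest subset of the reference's scope set; the real Racket and Gambit implementations are considerably richer.

```python
# Toy model of set-of-scopes binding resolution: each binding and each
# reference of a name carries a set of scope marks; a reference resolves to
# the binding whose scope set is the largest subset of the reference's scopes.
from dataclasses import dataclass

@dataclass(frozen=True)
class Binding:
    name: str
    scopes: frozenset
    value: str  # stand-in for the actual binding target

def resolve(name: str, ref_scopes: frozenset, bindings: list) -> Binding:
    candidates = [b for b in bindings
                  if b.name == name and b.scopes <= ref_scopes]
    if not candidates:
        raise NameError(f"unbound identifier: {name}")
    best = max(candidates, key=lambda b: len(b.scopes))
    # Ambiguity check: the winner's scope set must contain every candidate's.
    if any(not (b.scopes <= best.scopes) for b in candidates):
        raise NameError(f"ambiguous identifier: {name}")
    return best

# Example: a macro-introduced `x` (scopes {core, macro}) does not capture a
# use-site `x` that only carries {core, use}, preserving hygiene.
binds = [Binding("x", frozenset({"core"}), "user x"),
         Binding("x", frozenset({"core", "macro"}), "macro x")]
print(resolve("x", frozenset({"core", "use"}), binds).value)           # -> user x
print(resolve("x", frozenset({"core", "macro", "m1"}), binds).value)   # -> macro x
```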
|
173 |
Functional Programming Languages and the JVM : Comparing Functional Language Compilation Techniques for the JVM / Funktionell Programmering och JVM:en : Jamföring av kompileringstekniker av funktionella språk till JVM:en
Olofsson, Asta, January 2023
Because of its security, high availability, and automatic memory management, the JVM (Java Virtual Machine) is a desirable execution environment. However, since the JVM was originally made for Java, which is an object-oriented language, it is hard for languages with other paradigms to run on it. This thesis explores the challenges of compiling a functional language for the JVM. We implement a backend generating JVM bytecode as an extension to the compiler of MCore, a minimal functional language based on the lambda calculus. The backend features two different ways of compiling functions. One generates classes representing functions and their environment at compile time, and one creates these function classes dynamically at runtime. The two different versions of compiling functions are compared with regard to execution speed of the outputted bytecode, compiler output size, and recursion depth. The results show that the two different ways of compiling functions perform well on different metrics. / Because of its security, availability, and automatic memory management, the JVM (Java Virtual Machine) is a desirable execution environment. Because the JVM was originally created for the Java programming language, which is an object-oriented language, it is difficult for languages that follow other paradigms to run on the JVM. This thesis examines, through a literature study, the challenges that arise when compiling a functional language for the JVM. We implement a backend that generates JVM bytecode as an extension of the compiler for MCore, a minimal functional language based on the lambda calculus. The backend is implemented with two techniques for compiling functions. One generates classes that represent functions and their environment at compile time, and the other generates these function classes dynamically at runtime. We compare the two techniques with respect to the execution time, size, and recursion depth of the produced bytecode. The results show that the two function-compilation techniques perform differently on different measurements.
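For readers unfamiliar with the idea behind both strategies, the following hypothetical sketch shows closure conversion, i.e. turning a function value into an object that bundles code with its captured environment; the thesis emits JVM classes (either at compile time or dynamically at runtime), whereas this analogue is plain Python.

```python
# Sketch of closure conversion, the idea behind both compilation strategies:
# a lambda becomes an object bundling code with its captured environment.
class Closure:
    """A compiled function value: code plus captured free variables."""
    def __init__(self, code, env):
        self.code = code      # the lifted function body
        self.env = env        # values of the free variables at creation time
    def apply(self, arg):
        return self.code(self.env, arg)

# Source:  let add = lam x. lam y. x + y  in  (add 3) 4
def add_outer(env, x):
    # returns the inner closure, capturing x in its environment
    return Closure(add_inner, {"x": x})

def add_inner(env, y):
    return env["x"] + y

add = Closure(add_outer, {})
print(add.apply(3).apply(4))   # -> 7
```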
|
174 |
Compiler Techniques for Transformation Verification, Energy Efficiency and Cache Modeling
Bao, Wenlei, 13 September 2018
No description available.
|
175 |
Deep Learning for Code Generation using Snippet Level Parallel Data
Jain, Aneesh, 05 January 2023
In the last few years, interest in the application of deep learning methods to software engineering tasks has surged. A variety of approaches, such as transformer-based methods, statistical machine translation models, and models inspired by natural language settings, have been proposed and shown to be effective at tasks like code summarization, code synthesis and code translation. Multiple benchmark data sets have also been released, but all suffer from one limitation or another. Some data sets only support a select few programming languages while others support only certain tasks. These limitations restrict researchers' ability to perform thorough analyses of their proposed methods. In this work we aim to alleviate some of the limitations faced by researchers who work in the paradigm of deep learning applications for software engineering tasks. We introduce a large, parallel, multi-lingual programming language data set that supports tasks like code summarization, code translation, code synthesis and code search in 7 different languages. We provide benchmark results for the current state of the art models on all these tasks and we also explore some limitations of current evaluation metrics for code related tasks. We provide a detailed analysis of the compilability of code generated by deep learning models because that is a better measure of ascertaining usability of code as opposed to scores like BLEU and CodeBLEU. Motivated by our findings about compilability, we also propose a reinforcement learning based method that incorporates code compilability and syntax level feedback as rewards, and we demonstrate its effectiveness in generating code that has fewer syntax errors compared to baselines. In addition, we develop a web portal that hosts the models we have trained for code translation. The portal allows translation between 42 possible language pairs and also allows users to check the compilability of the generated code. The intent of this website is to give researchers and other audiences a chance to interact with and probe our work in a user-friendly way, without requiring them to write their own code to load the models and run inference. / Master of Science / Deep neural networks have now become ubiquitous and find applications in almost every technology and service we use today. In recent years, researchers have also started applying neural network based methods to problems in the software engineering domain. Software engineering by its nature requires a lot of documentation, and creating this natural language documentation automatically, using programs as input to the neural networks, has been one of their first applications in this domain. Other applications include translating code between programming languages and searching for code using natural language as one does on websites like stackoverflow. All of these tasks now have the potential to be powered by deep neural networks. It is common knowledge that neural networks are data hungry, and in this work we present a large data set containing code in multiple programming languages like Java, C++, Python, C#, Javascript, PHP and C. Our data set is intended to foster more research in automating software engineering tasks using neural networks. We provide an analysis of the performance of multiple state of the art models using our data set in terms of compilability, which measures the number of syntax errors in the code, as well as other metrics.
In addition, we propose our own deep neural network based model for code translation, which uses feedback from programming language compilers in order to reduce the number of syntax errors in the generated code. We also develop and present a website where some of our code translation models are hosted. The website allows users to interact with our work in an easy manner, without any knowledge of deep learning, and get a sense of how these technologies are being applied to software engineering tasks.
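One plausible way to turn compiler feedback into a scalar reward, in the spirit of the reinforcement learning method described above; the Python-only check, the partial-credit scheme, and the function name are illustrative assumptions rather than the thesis's actual reward.

```python
# Sketch of a compilability-based reward: try to compile a generated snippet
# and map the outcome to a scalar that a reinforcement-learning fine-tuning
# loop could use. Shown for Python via the built-in compile(); a real setup
# for other languages would invoke the corresponding compilers instead.

def compilability_reward(snippet: str) -> float:
    """+1.0 if the snippet compiles, otherwise a penalty scaled by error position."""
    try:
        compile(snippet, "<generated>", "exec")
        return 1.0
    except SyntaxError as err:
        # partial credit: reward code whose first syntax error appears later
        lines = snippet.count("\n") + 1
        return -1.0 + (err.lineno or 1) / (lines + 1)

print(compilability_reward("def f(x):\n    return x + 1\n"))   # -> 1.0
print(compilability_reward("def f(x)\n    return x + 1\n"))    # penalized
```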
|
176 |
Mapping Genotype to Phenotype using Attribute Grammar
Adam, Laura, 20 September 2013
Over the past 10 years, several synthetic biology research groups have proposed tools and domain-specific languages to help with the design of artificial DNA molecules. Community standards for exchanging data between these tools, such as the Synthetic Biology Open Language (SBOL), have been developed. It is increasingly important to be able to perform in silico simulation before the time- and cost-consuming wet-lab realization of the constructs, which, as technology advances, also become more complex themselves. By extending the concept of describing genetic expression as a language, we propose to model relations between genotype and phenotype using formal language theory.
We use attribute grammars (AGs) to extract context-dependent information from genetic constructs and compile them into mathematical models, possibly giving clues about their phenotypes. They may be used as a backbone for biological Domain-Specific Languages (DSLs) and we developed a methodology to design these AG based DSLs. We gave examples of languages in the field of synthetic biology to model genetic regulatory networks with Ordinary Differential Equations (ODEs) based on various rate laws or with discrete boolean network models.
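A toy sketch of the attribute-grammar compilation described above: walking a parsed genetic construct and synthesizing ODE terms as attributes. The part names, rate constants, and grammar shape are illustrative assumptions, not GenoCAD's actual languages.

```python
# Toy attribute-grammar-style compilation: a promoter attribute is inherited
# by downstream coding sequences, and each CDS synthesizes an ODE term for
# its protein; a terminator ends transcription. Purely illustrative.
CONSTRUCT = [("promoter", "pTet"), ("rbs", "B0034"),
             ("cds", "tetR"), ("terminator", "B0015")]

def synthesize_odes(parts):
    odes, promoter = {}, None
    for kind, name in parts:
        if kind == "promoter":
            promoter = name                      # inherited by downstream CDSs
        elif kind == "cds" and promoter:
            # synthesized attribute: production minus first-order degradation
            odes[name] = f"d[{name}]/dt = k_tx*k_tl*activity({promoter}) - deg*[{name}]"
        elif kind == "terminator":
            promoter = None                      # transcription stops here
    return odes

for protein, rhs in synthesize_odes(CONSTRUCT).items():
    print(rhs)
```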
We implemented a demonstration of these concepts in GenoCAD, Computer Assisted Design (CAD) software for synthetic biology. GenoCAD guides users from design to simulation. Users can either design constructs with the attribute grammars provided or define their own project-specific languages. Outputting the mathematical model of a genetic construct is performed by DNA compilation based on the attribute grammar specified; the design of new languages by users necessitated the on-the-fly generation of such attribute-grammar-based DNA compilers.
We also considered the impact of our research and its potential dual-use issues. Indeed, after the design exploration is performed in silico, the next logical step is to synthesize the designed construct's DNA molecule to build the construct in vivo. We implemented an algorithm to identify sequences of concern of any length that are specific to Select Agents and Toxins, helping to ensure safer use of our methods. / Ph. D.
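A hedged sketch of one way such screening could work, flagging subsequences that are present in a select-agent reference set but absent from a background set; the k-mer length and toy sequences are assumptions, not the thesis's algorithm.

```python
# Sketch: build the set of k-mers specific to select-agent sequences
# (present in the agent set, absent from the background set), then report
# any of those k-mers found in a query sequence.

def kmers(seq: str, k: int):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def specific_kmers(agent_seqs, background_seqs, k: int = 12):
    """k-mers found in select-agent sequences but in no background sequence."""
    agent = set().union(*(kmers(s, k) for s in agent_seqs))
    background = set().union(*(kmers(s, k) for s in background_seqs))
    return agent - background

def screen(query: str, concern: set, k: int = 12):
    return sorted(kmers(query, k) & concern)

concern = specific_kmers(["ATGGCATTAGCCGGTTACGATCAA"], ["ATGGCATTAGCC"], k=12)
print(screen("TTTGCCGGTTACGATCAAAAA", concern))  # reports any matching 12-mers
```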
|
177 |
CO-DESIGN OF QUANTUM SOFTWARE AND HARDWARE
Jinglei Cheng (18975923), 05 July 2024
<p dir="ltr">Quantum computing is advancing rapidly, with variational quantum algorithms (VQAs) showing great promise for demonstrating quantum advantage on near-term devices. A critical component of VQAs is the ansatz, a parameterized quantum circuit that is iteratively optimized. However, compiling ansatz circuits for specific quantum hardware is challenging due to topological constraints, gate errors, and decoherence. This thesis presents a series of techniques to efficiently generate and optimize quantum circuits, with a focus on VQAs. We first introduce AccQOC, a framework combining static pre-compilation with accelerated dynamic compilation to transform quantum gates to hardware pulses using quantum optimal control (QOC). AccQOC generates pulses for frequently used gate sequences in advance and stores them in a lookup table. For new gate sequences, it utilizes a Minimum Spanning Tree based approach to find the optimal compilation order that maximizes the similarity between consecutive sequences, thereby accelerating the compilation process. By leveraging pre-computed pulses and employing a similarity-based approach, AccQOC achieves a 9.88×speedup in compilation time compared to standard QOC methods while maintaining a 2.43×latency reduction over gate-based compilation. Building on AccQOC, we propose EPOC, an extended framework integrating circuit partitioning, ZX-calculus optimization, and synthesis methods. EPOC operates at a finer granularity compared to previous coarse-grained approaches, decomposing circuits into smaller sub-circuits based on the number of qubits and circuit depth. It then applies synthesis techniques to identify equivalent representations with reduced gate count. The optimized sub-circuits are then grouped into larger unitary matrices, which are used as inputs for QOC. This approach enables increased parallelism and reduced latency in the resulting quantum pulses. Compared to the state-of-the-art pulse optimization framework, EPOC achieves a 31.74% reduction in circuit latency and a 76.80% reduction compared to gate-based methods. To construct hardware-efficient ansatz for VQAs, we introduce two novel approaches. TopGen is a topology-aware bottom-up approach that generates sub-circuits according to the connectivity constraints of the target quantum device. It starts by generating a library of subcircuits that are compatible with the device topology and evaluates them based on metrics 17 like expressibility and entangling capability. The sub-circuits with the best properties are then selected and progressively combined using different techniques. TopGen also employs dynamic circuit growth, where small sub-circuits are appended to the ansatz during training, and gate pruning, which removes gates with small parameters. Evaluated on a range of VQA tasks, TopGen achieves an average reduction of 50% in circuit depth after compilation compared to previous methods. NAPA takes a more direct approach by utilizing devicenative parametric pulses as the fundamental building blocks for constructing the ansatz. It uses cross-resonance pulses for entangling qubits and DRAG pulses for single-qubit rotations. The ansatz is constructed in a hardware-efficient manner. By using the better flexibility and expressivity of parametric pulses, NAPA demonstrates up to 97.3% latency reduction while maintaining accuracy comparable to gate-based approaches when evaluated on real quantum devices. Finally, we explore error mitigation techniques for VQAs at the pulse level. 
We develop a fidelity estimator based on reversed pulses that enables randomized benchmarking of parametric pulses. This estimator compares the final state obtained after applying a sequence of pulses followed by their reversed counterparts to the initial state, using the probability of successful trials as a proxy for fidelity. Furthermore, we adapt the zero-noise extrapolation (ZNE) technique to the pulse level, enabling error mitigation for quantum pulses. Applied to VQE tasks for H2 and HeH+ molecules, pulse-level ZNE reduces the deviation from ideal expectation values by an average of 54.1%. The techniques developed in this thesis advance the efficiency and practicality of VQAs on near-term quantum devices. The introduced frameworks, AccQOC and EPOC, provide efficient pulse optimization, while TopGen and NAPA construct hardware-efficient ansatz. In addition, the pulse-level error mitigation techniques presented in this thesis improve the resilience of VQAs against the inherent noise and imperfections of NISQ devices. Together, these contributions help unlock the full potential of quantum computing and realize practical quantum advantages in the near future.
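A minimal sketch of zero-noise extrapolation as adapted above to the pulse level: measure the same observable at several amplified noise levels and extrapolate back to the zero-noise limit. The `run_at_noise_scale` callback, the scale factors, and the linear fit are assumptions standing in for actual pulse execution on hardware or a noisy simulator.

```python
# Sketch of zero-noise extrapolation (ZNE): evaluate an expectation value at
# several noise scale factors (e.g. by stretching pulses), fit a model, and
# extrapolate to scale 0.
import numpy as np

def zne_expectation(run_at_noise_scale, scales=(1.0, 1.5, 2.0), degree=1):
    """Richardson-style extrapolation: fit E(scale) and evaluate at scale=0."""
    values = [run_at_noise_scale(s) for s in scales]
    coeffs = np.polyfit(scales, values, deg=degree)
    return np.polyval(coeffs, 0.0)

# Example with synthetic data: true value 1.0, error grows linearly with scale.
mitigated = zne_expectation(lambda s: 1.0 - 0.12 * s)
print(round(mitigated, 3))   # -> 1.0 (recovers the noiseless value)
```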
|
178 |
Écrire les vies des Neuf Preux et des Neuf Preuses à la fin du Moyen Âge : étude et édition critique partielle du Traité des Neuf Preux et des Neuf Preuses de Sébastien Mamerot (Josué, Alexandre, Arthur; les Neuf Preuses) / Telling the lives of the Nine Worthies and the Nine Worthy Women in the late Middle Ages : a study and partial critical edition of the Traité des Neuf Preux et des Neuf Preuses by Sébastien Mamerot (Joshua, Alexander, Arthur; the Nine Worthy Women)
Salamon, Anne, 26 November 2011
The Traité des Neuf Preux et des Neuf Preuses, composed between 1460 and 1468 by Sébastien Mamerot for his lord Louis de Laval, has come down to us in a single, richly illuminated copy (ÖNB, cod. 2577-2578). This work aims to provide a study of this unpublished text as well as a partial edition. The motif of the Nine Worthies and the Nine Worthy Women developed in literature and the figurative arts from a passage of the Vœux du paon by Jacques de Longuyon. Following the success of Boccaccio's works and of the theme of illustrious men and women, three compilations appeared, organised on the principle of the collection of biographies and devoted exclusively to the Nine Worthies. Sébastien Mamerot's text is the first of these three texts, which are more or less contemporary but independent. Moreover, it is the only one to give a place to the Worthy Women. Whereas the motif is often reduced to the evocation of a list of names, these texts constitute vast historical compilations retracing the genealogy and high deeds of the nine heroes and heroines. True library-books, since they contain a biography of several of the favourite figures of the Middle Ages, these texts allow us a certain view of the works of earlier centuries and their favour in the fifteenth century, providing a genuine guide to the historical reading of the period. The Traité des Neuf Preux et des Neuf Preuses stands at the crossroads of diverse cultural and literary influences: the desire to set down in writing and give flesh back to these lists of well-known figures by writing their lives embodies the period's sensibility to the links between fame, writing, and memory. / The Traité des Neuf Preux et des Neuf Preuses, written between 1460 and 1468 by Sébastien Mamerot for his lord Louis de Laval, has survived in only one manuscript, richly illuminated. The purpose of this thesis is to provide a commentary of this text as well as a partial critical edition. The topos of the Nine Worthies and Nine Worthy Women spread through literature and the figurative arts from the Vœux du paon by Jacques de Longuyon. Following the success of Boccaccio's works and that of the Famous Men and Women theme, three compilations devoted to the Worthies were written, organised as collections of biographies. Sébastien Mamerot's text is the first of these three works that, although more or less contemporary, are nonetheless independent from each other. Moreover, his is the only one to deal with the Nine Worthy Women. Whereas the topos of the Nine Worthies is often reduced to a list of names, these texts consist of vast historical compilations telling the genealogy and high deeds of the heroes and heroines. Each book is like a small library in so far as it contains biographies of some of the favourite figures of the Middle Ages; through them we can have an outlook on the reception in the XVth century of the works from the previous centuries, which enables us to establish a guide of the favourite historical books read at the time. Le Traité des Neuf Preux et des Neuf Preuses is at the crossroads of different cultural and literary influences: filling in those well-known names by telling their lives embodies the late medieval sensibility to the links existing between fame, writing and memory.
|
179 |
Contraintes d'anti-filtrage et programmation par réécriture / Anti-matching constraints and programming with rewrite rules
Köpetz, Radu, 15 October 2008
The main objective of this thesis is the study and formalisation of new constructs that increase the expressivity of pattern matching and of rule-based languages in general. This is motivated by the development of Tom, a system that enriches imperative languages such as Java and C with high-level constructs such as pattern matching and strategies. A first extension that we propose is the notion of anti-patterns, i.e. patterns that may contain complement symbols. We formally define the semantics of anti-patterns in the syntactic case and modulo an arbitrary equational theory. We then extend the classical notion of matching between patterns and ground terms to matching between anti-patterns and ground terms (anti-matching). Next, we propose several extensions to the matching constructs provided by Tom. The condition for the application of a rule becomes a conjunction or disjunction of matching and anti-matching constraints together with other kinds of conditions. Classical pattern-matching compilation techniques are not well suited to such complex conditions. We therefore propose a new compilation method based on rewrite systems controlled by strategies. We have completely rewritten Tom's compiler using this technique. Taken together, these elements constitute an environment for describing and implementing transformations in an elegant and concise way. To promote its use in large-scale projects, we develop a technique for automatically extracting structural information from a Java class hierarchy. This makes it possible to integrate Tom's pattern matching into any Java application. / The main objective of this thesis is the study of new constructs and formalisms that increase the expressivity of pattern matching and rule based languages in general. This is motivated by the development of Tom, a system that adds high level constructs such as pattern matching and strategies to languages like Java and C. A first extension that we propose is the notion of anti-patterns, i.e. patterns that may contain complement symbols. We define formally the semantics of anti-patterns both in the syntactic case and modulo an arbitrary equational theory. We then extend the classical notion of matching between patterns and ground terms to matching between anti-patterns and ground terms. We further propose several extensions to the matching constructs provided by Tom. Consequently, the condition for the application of a rule becomes a combination of matching and anti-matching constraints together with other types of conditions. Classical compilation techniques for pattern matching are not very well suited for these complex conditions. Therefore we propose a new compilation method based on rewrite systems controlled by strategies, which provides a high level of modularity. Tom's compiler has been rewritten from scratch using this technique. All this constitutes a software environment for expressing transformations in a clear and concise way. To promote its use in large scale applications, we propose an approach for extracting automatically structural information from arbitrary Java hierarchies. This allows a seamless integration of Tom's pattern matching facilities in any application.
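A toy model of anti-pattern matching as described above, where patterns are nested tuples, "?x" is a variable, and ("!", p) is a complement; this is an illustrative simplification, not Tom's actual syntax or its semantics modulo equational theories.

```python
# Toy anti-pattern matcher over nested tuples: returns a substitution on
# success, None on failure. An anti-pattern ("!", p) succeeds exactly when
# the complemented pattern p does NOT match the term.

def match(pattern, term, subst=None):
    subst = dict(subst or {})
    if isinstance(pattern, tuple) and pattern and pattern[0] == "!":
        # anti-pattern: succeeds iff the complemented pattern fails
        return subst if match(pattern[1], term, subst) is None else None
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst and subst[pattern] != term:
            return None
        subst[pattern] = term
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

# f(x, y) where the second argument is anything but the constant a:
print(match(("f", "?x", ("!", "a")), ("f", "b", "c")))  # -> {'?x': 'b'}
print(match(("f", "?x", ("!", "a")), ("f", "b", "a")))  # -> None
```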
|
180 |
Sections atomiques emboîtées avec échappement de processus légers : sémantiques et compilation / Nested atomic sections with thread escape : semantics and compilation
Pinsard, Thomas, 15 December 2014
Transactional memory is an increasingly popular mechanism for parallel and concurrent programming. In most implementations, the nesting of transactions is not possible, which penalises modularity. Rather than transactions, which are one possible implementation choice, we consider directly the notion of atomic section. With the aim of improving modularity and expressivity, we consider a simple imperative language extended with parallelism instructions for spawning and joining threads, and with a lexically scoped atomic-section construct from which threads can escape. In this context, our first contribution is the precise definition of atomicity and well-synchronisation. We prove that, for well-formed traces, the latter implies the strong form of the former. This is done on abstract execution traces, in the sense that we do not precisely define the syntax and operational semantics of a programming language. This first part of our work can be seen as a specification for such a language. We used the Coq proof assistant to model and prove our results. Our second contribution is the formal definition of the Atomic Fork Join (AFJ) language. We show that the traces of its operational semantics indeed satisfy the well-formedness conditions defined previously. The third contribution is the compilation of AFJ programs into Lock Unlock Fork Join (LUFJ) programs, a language with threads and locks but without atomic sections. We study the correctness of the compilation from AFJ to LUFJ. / Transactions are becoming a popular mechanism for parallel and concurrent programming. In most implementations the nesting of transactions is not supported, which hinders modularity. Rather than transactions, which are an implementation choice, we consider directly the notion of atomic section. For the sake of modularity, we consider a simple imperative language with fork/join parallelism and lexically scoped nested atomic sections from which threads can escape. In this context, our first contribution is the precise definition of atomicity, well-synchronisation and the proof that the latter implies the strong form of the former. This is done on execution traces without being specific to a language syntax and operational semantics. This first part of our work could be considered as a specification for the design and implementation of such a parallel language. A formalisation of our results in the Coq proof assistant is also available. Our second contribution is a formal definition of the Atomic Fork Join (AFJ) language and its operational semantics. We show that it indeed satisfies the conditions previously defined. The third contribution of our work is a compilation procedure of AFJ programs to programs of another language with threads and locks but without atomic sections, named Lock Unlock Fork Join (LUFJ). We study the correctness of the compilation from AFJ to LUFJ.
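A naive sketch of how lexically nested atomic sections can be lowered to lock/unlock pairs, here with a single reentrant lock so that nesting composes; this only hints at the AFJ-to-LUFJ translation and ignores thread escape, which the thesis's compilation handles.

```python
# Naive compilation of nested atomic sections to lock/unlock on one
# reentrant lock: entering an inner section while already inside an outer
# one simply re-acquires the same lock, so nesting does not deadlock.
import threading
from contextlib import contextmanager

_GLOBAL = threading.RLock()    # reentrant: the same thread may re-acquire it

@contextmanager
def atomic():
    _GLOBAL.acquire()          # compiled "enter section" -> lock
    try:
        yield
    finally:
        _GLOBAL.release()      # compiled "leave section" -> unlock

counter = 0

def worker():
    global counter
    with atomic():             # outer atomic section
        with atomic():         # nested section: re-acquires the same lock
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                 # -> 4
```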
|