1.
Toward verified compilation of Sea of Nodes: semantic properties and reasoning. Fernández de Retana, Yon. 05 July 2018.
Optimizing compilers for programming languages have become complex pieces of software and are hence a source of bugs, which can be dangerous in the context of critical systems such as avionics or health care. This thesis is part of research on verified optimizing compilation, whose objective is to ensure the absence of such bugs. More precisely, we semantically study a particular SSA (Static Single Assignment) intermediate representation, Sea of Nodes, which is notably used in the HotSpot optimizing compiler for Java. The SSA property has already been studied from a semantic point of view on simple intermediate representations in control-flow-graph form, but the subject of dependencies between instructions has only been touched on from a formal perspective. This thesis provides a semantic study of transformations of programs in Sea of Nodes form, accommodating the flexibility this representation allows in the data dependencies between instructions. In particular, redundant zero-check elimination, constant propagation, transformation back to sequential basic blocks, and SSA destruction are studied. Some of the topics addressed, including the formalization of a semantics for Sea of Nodes, are accompanied by a verification using the Coq proof assistant.
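The transformations named above operate on a graph of value nodes rather than on a conventional instruction listing. As a rough illustration only (not the thesis's Coq development), the following OCaml sketch models a tiny Sea-of-Nodes-style value graph and applies constant propagation together with redundant zero-check elimination; the constructor names are assumptions made for this example.

```ocaml
(* Hypothetical node type for a tiny Sea-of-Nodes-like value graph. *)
type node =
  | Const of int
  | Param of string
  | Add of node * node
  | ZeroCheck of node * node  (* ZeroCheck (x, use): trap if x = 0, else evaluate use *)

(* Constant propagation plus redundant zero-check elimination:
   fold constant additions, and drop a zero check whose operand
   is a provably non-zero constant. *)
let rec simplify (n : node) : node =
  match n with
  | Const _ | Param _ -> n
  | Add (a, b) ->
      (match simplify a, simplify b with
       | Const x, Const y -> Const (x + y)
       | a', b' -> Add (a', b'))
  | ZeroCheck (x, use) ->
      (match simplify x with
       | Const c when c <> 0 -> simplify use  (* the check is redundant *)
       | x' -> ZeroCheck (x', simplify use))

(* (2 + 3) folds to 5, a non-zero constant, so the guard disappears. *)
let _ = simplify (ZeroCheck (Add (Const 2, Const 3), Param "guarded_use"))
```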

2.
Speeding up dynamic compilation: concurrent and parallel dynamic compilation. Bohm, Igor. January 2013.
The main challenge faced by a dynamic compilation system is to detect and translate frequently executed program regions into highly efficient native code as fast as possible. To reduce dynamic compilation latency, a dynamic compilation system must improve its workload throughput, i.e., compile more application hotspots per unit of time. Since time spent in dynamic compilation adds to the overall execution time, the dynamic compiler is often decoupled into a separate thread that operates independently of the main execution loop, reducing the overhead of dynamic compilation. This thesis proposes innovative techniques aimed at effectively speeding up dynamic compilation. The first contribution is a generalised region recording scheme optimised for program representations that require dynamic code discovery (e.g. binary program representations). The second contribution reduces dynamic compilation cost by incrementally compiling several hot regions in a concurrent and parallel task farm. Altogether, the combination of generalised lightweight code discovery, large translation units, dynamic work scheduling, and concurrent and parallel dynamic compilation ensures timely and efficient processing of compilation workloads. Compared to state-of-the-art dynamic compilation approaches, speedups of up to 2.08x are demonstrated for industry-standard benchmarks such as BioPerf, SPEC CPU 2006, and EEMBC. Next, innovative applications of the proposed dynamic compilation scheme to speeding up architectural and micro-architectural performance modelling are demonstrated. The main contribution in this context is to exploit runtime information to dynamically generate optimised code that accurately models architectural and micro-architectural components. Consequently, compilation units are larger and more complex, resulting in increased compilation latencies. Such large and complex compilation units present an ideal use case for our concurrent and parallel dynamic compilation infrastructure. We demonstrate that our novel micro-architectural performance modelling is faster than state-of-the-art FPGA-based simulation, whilst providing the same level of accuracy.
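The decoupled task-farm organisation is easy to picture: the execution loop enqueues hot regions, and workers compile them in parallel while execution continues. The OCaml 5 sketch below uses Domains as a stand-in for the thesis's worker threads; the region type and compile function are placeholders invented for this example, not the thesis's infrastructure.

```ocaml
(* Placeholder for a hot program region discovered at run time. *)
type region = { id : int; code : string }

(* Stand-in for translation of a region to native code. *)
let compile (r : region) : string =
  Printf.sprintf "native_code<region %d: %s>" r.id r.code

let () =
  let hot_regions =
    [ { id = 0; code = "inner_loop" }; { id = 1; code = "dispatch_table" } ] in
  (* Farm each hot region out to a worker domain; the main loop could
     keep interpreting while these compile. *)
  let workers =
    List.map (fun r -> Domain.spawn (fun () -> compile r)) hot_regions in
  List.iter (fun w -> print_endline (Domain.join w)) workers
```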

3.
Architectural support for compilation. Palmer, Ian James. January 1996.
No description available.

4.
Generation of model programs for the representation and analysis of execution profiles: the periodic-linear model. Kenmei Youta, Bénédicte Ramelie; Clauss, Philippe. January 2006. (PDF)
Doctoral thesis: Computer Science: Strasbourg 1: 2006. Title taken from the title screen. Bibliography: 6 pages.

5.
Multidimensional specification for systematic signal processing. Dumont, Philippe; Boulet, Pierre. January 2007.
Reproduction of: doctoral thesis: Computer Science: Lille 1: 2005. Order number (Lille 1): 3756. Abstract in French and English. Title taken from the title page of the digitized document. Bibliography: pages 119-122.

6.
Denotational Translation Validation. Govereau, Paul. 02 January 2013.
In this dissertation we present a simple and scalable system for validating the correctness of low-level program transformations. Proving that program transformations are correct is crucial to the development of security-critical software tools. We achieve a simple and scalable design by compiling sequential low-level programs to synchronous data-flow programs. These data-flow programs are a denotation of the original programs, representing all of the relevant aspects of the program semantics. We then check that the two denotations are equivalent, which implies that the program transformation is semantics-preserving. Our denotations are computed by means of symbolic analysis. In order to achieve our design, we have extended symbolic analysis to arbitrary control-flow graphs. To this end, we have designed an intermediate language called Synchronous Value Graphs (SVG), which is capable of representing our denotations for arbitrary control-flow graphs; built an algorithm for computing SVG from normal assembly language; and given a formal model of SVG which allows us to simplify and compare denotations. Finally, we report on our experiments with LLVM M.D., a prototype denotational translation validator for the LLVM optimization framework. / Engineering and Applied Sciences
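The core loop of such a validator — denote both programs symbolically, normalize, compare — can be sketched in a few lines. The OCaml below is a toy stand-in invented for illustration, vastly simpler than SVG (a single expression type with literal folding and operand ordering), but it shows why equal normalized denotations certify a rewrite.

```ocaml
(* Toy symbolic denotation: integer expressions over named inputs. *)
type sym =
  | Var of string
  | Lit of int
  | Plus of sym * sym

(* Normalize denotations: fold literal additions and order commutative
   operands so equivalent programs meet in a single form. *)
let rec norm (e : sym) : sym =
  match e with
  | Var _ | Lit _ -> e
  | Plus (a, b) ->
      (match norm a, norm b with
       | Lit x, Lit y -> Lit (x + y)
       | a', b' -> if a' <= b' then Plus (a', b') else Plus (b', a'))

(* A transformation is accepted when both sides have equal denotations. *)
let validates before after = norm before = norm after

(* x + (1 + 2) rewritten to 3 + x is validated: both sides normalize
   to the same value, Plus (Var "x", Lit 3). *)
let () = assert (validates (Plus (Var "x", Plus (Lit 1, Lit 2)))
                           (Plus (Lit 3, Var "x")))
```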

7.
The Essence of Codata and Its Implementations. Sullivan, Zachary. 06 September 2018.
Data types are a widely used feature of functional programming languages that allow programmers to create abstractions and control branching computations. Instances of data types are introduced by applying one of a disjoint set of constructors and are eliminated by pattern matching on the constructor used. Dually, codata types are defined by their destructors: they are introduced by copattern matching on their context and eliminated by applying destructors.

We extend the motivation for codata types to include adding types that satisfy the extensional laws and adding an abstraction for constraining clients of code. We also improve on prior work implementing codata by developing an untyped compilation technique for codata that works for both call-by-name and call-by-value evaluation strategies and scales to simple and indexed type systems. We demonstrate the practicality of our technique by implementing a prototype compiler and a Haskell language extension.
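The duality is easiest to see on a concrete codata type. The OCaml encoding below is an assumption made purely for illustration (OCaml has no copatterns, so observations become record fields, and the thesis itself targets copattern matching and a Haskell extension): an infinite stream is defined entirely by what its two destructors, head and tail, return.

```ocaml
(* A codata type given by its destructors: observing a stream yields
   a head and a (delayed) tail; there is no constructor-based view. *)
type 'a stream = { head : 'a; tail : unit -> 'a stream }

(* "Copattern-style" definition: specify the result of each observation. *)
let rec from (n : int) : int stream =
  { head = n; tail = (fun () -> from (n + 1)) }

(* Elimination is destructor application. *)
let rec take k s = if k = 0 then [] else s.head :: take (k - 1) (s.tail ())

let _ = take 3 (from 0)  (* [0; 1; 2] *)
```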

8.
Multi-time code generation and multi-objective code optimisation. Lomüller, Victor. 12 November 2014.
Compilation is an essential step in creating efficient applications. It allows the use of high-level, target-independent languages while maintaining good performance. However, many obstacles prevent compilers from fully optimizing applications. For static compilers, the major obstacle is poor knowledge of the execution context, particularly of the target architecture and the input data; this knowledge only becomes available progressively during the application's life cycle. To exploit it, compilers have progressively integrated dynamic code generation techniques, but those techniques usually focus on making the best use of the hardware and make very little use of the data. In this thesis, we investigate the use of data in the process of optimizing applications for Nvidia GPUs. We propose a method that uses different moments in the application's life cycle to create adaptive libraries able to take the size of the data into account. These libraries can then provide the compute kernels best suited to the context. On the GEMM algorithm, the method obtains gains of up to 100% while avoiding an explosion in code size. The thesis also examines the gains and costs of code generation at run time, in terms of execution speed, memory footprint, and energy consumption. We propose and study two lightweight runtime code generation approaches that allow code specialization at low overhead. We show that these two approaches obtain speed and energy gains comparable to, and sometimes better than, LLVM, but at a lower cost.
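The adaptive-library idea amounts to deferring the choice of kernel until the data size is known. The OCaml sketch below is an invented illustration of that dispatch step only — variant names and thresholds are assumptions, not measurements from the thesis, and real variants would be generated code rather than fixed functions.

```ocaml
(* Hypothetical specialized GEMM variants, one per size regime. *)
type variant = Small_tiles | Blocked | Blocked_prefetch

(* Thresholds are made up for the example; an adaptive library would
   derive them from profiling at earlier moments of the life cycle. *)
let select (n : int) : variant =
  if n <= 64 then Small_tiles
  else if n <= 2048 then Blocked
  else Blocked_prefetch

(* The adaptive entry point: dispatch on the runtime matrix dimension. *)
let gemm (n : int) =
  match select n with
  | Small_tiles -> print_endline "dispatching small-tile kernel"
  | Blocked -> print_endline "dispatching blocked kernel"
  | Blocked_prefetch -> print_endline "dispatching blocked kernel with prefetch"

let () = gemm 32; gemm 4096
```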

9.
Asynchronous code generation in a polychronous environment for the production of GALS systems. Ouy, Julien; Talpin, Jean-Pierre. January 2008. (PDF)
Doctoral thesis: Computer Science: Rennes 1: 2008. Bibliography: pages 107-110.

10.
Dataflow Analysis and Optimization of High Level Language Code for Hardware-Software Co-Design. O'Connor, R. Brendan. 07 May 1996.
Recent advancements in FPGA technology have provided devices that are not only suited to digital logic prototyping but are also capable of implementing complex computations. The use of these devices in multi-FPGA Custom Computing Machines (CCMs) has created the potential to execute large sections of programs entirely in custom hardware, which can provide a substantial speedup over execution on a general-purpose sequential processor. Unfortunately, the development tools currently available for CCMs do not allow users to easily configure multi-FPGA platforms. In order to exploit the capabilities of such an architecture, a procedure has been developed to perform a dataflow analysis of programs written in C that is capable of several hardware-specific optimizations. This, together with other software tools developed for this purpose, allows CCMs and their host processors to be targeted from the same high-level specification. / Master of Science
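A dataflow analysis of this kind is, at heart, a fixpoint over per-block transfer functions. As a hedged illustration (the block encoding below is an assumption made for this example, not the thesis's toolchain), here is a backward liveness analysis in OCaml — the sort of per-variable information a hardware/software partitioner needs.

```ocaml
module S = Set.Make (String)

(* A basic block summarized by the variables it uses and defines,
   plus the indices of its successor blocks. *)
type block = { uses : S.t; defs : S.t; succs : int list }

(* live_in(b) = uses(b) ∪ (live_out(b) \ defs(b)); iterate to fixpoint. *)
let liveness (blocks : block array) : S.t array =
  let n = Array.length blocks in
  let live_in = Array.make n S.empty in
  let changed = ref true in
  while !changed do
    changed := false;
    for i = n - 1 downto 0 do
      let b = blocks.(i) in
      let live_out =
        List.fold_left (fun acc j -> S.union acc live_in.(j)) S.empty b.succs
      in
      let li = S.union b.uses (S.diff live_out b.defs) in
      if not (S.equal li live_in.(i)) then begin
        live_in.(i) <- li;
        changed := true
      end
    done
  done;
  live_in

(* Example CFG: block 0 defines x, block 1 uses it, so x is live into block 1. *)
let _ =
  liveness [| { uses = S.empty; defs = S.singleton "x"; succs = [1] };
              { uses = S.singleton "x"; defs = S.empty; succs = [] } |]
```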