51 |
Incremental Compilation and Dynamic Loading of Functions in OpenModelica. Klinghed, Joel; Jansson, Kim. January 2008
Advanced development environments are essential for the efficient realization of complex industrial products. Powerful equation-based object-oriented (EOO) languages such as Modelica are successfully used for modeling and virtual prototyping of complex physical systems and components. The Modelica language enables engineers to build large, sophisticated and complex models, and Modelica environments should scale up and be able to handle these large models. This thesis addresses the scalability of Modelica tools by employing incremental compilation and dynamic loading; the design, implementation and evaluation of this approach is presented. OpenModelica is an open-source Modelica environment developed at PELAB in which we have implemented our strategy for incremental compilation and dynamic loading of functions. We have tested the performance of these strategies in a number of different scenarios in order to see how much of an impact they have on compilation and execution time.

Our solution incurs an overhead of one or two hash lookups at runtime, since it uses dynamic hash tables instead of static arrays.
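As an illustration of the mechanism described above (a hand-written sketch, not OpenModelica's actual implementation), the following Python fragment loads incrementally compiled functions from shared libraries on demand and keeps them in a hash table, so a call costs at most one or two hash lookups; the directory layout, function name, and signature are assumptions made for the example.

```python
import ctypes
from pathlib import Path

class FunctionCache:
    """Illustrative cache: functions compiled incrementally into shared
    libraries are loaded on demand and found again via hash lookups."""

    def __init__(self, build_dir):
        self.build_dir = Path(build_dir)
        self._loaded = {}  # name -> callable; a dynamic hash, not a static array

    def lookup(self, name):
        # First hash call: has the function already been loaded?
        fn = self._loaded.get(name)
        if fn is None:
            # Cache miss: dynamically load the incrementally compiled unit.
            lib = ctypes.CDLL(str(self.build_dir / f"{name}.so"))
            fn = getattr(lib, name)
            fn.restype = ctypes.c_double
            fn.argtypes = [ctypes.c_double]
            self._loaded[name] = fn  # second hash call on insertion
        return fn

# Hypothetical usage: cache = FunctionCache("./omc_build")
#                     y = cache.lookup("bouncingBall_step")(0.01)
```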
|
52 |
Génération de code réparti par distribution de données / Distributed code generation through data distribution. Pazat, Jean-Louis. 27 November 1997
This document describes compilation and execution methods for the automatic generation of distributed code through data distribution.
|
53 |
Réécriture et compilation de confiance / Rewriting and trusted compilation. Reilles, Antoine; Kirchner, Claude. January 2006
Doctoral thesis: Computer Science: INPL: 2006. Title taken from the title screen. Includes a bibliography.
|
55 |
Applying support vector machines to discover just-in-time method-specific compilation strategies. Nabinger Sanchez, Ricardo (11 1900)
Adaptive Just-in-Time compilers employ multiple techniques to concentrate compilation efforts on the most promising spots of the application, balancing tight compilation budgets with an appropriate level of code quality. Some compiler researchers propose that Just-in-Time compilers could benefit from method-specific compilation strategies, which can be discovered through machine-learning techniques: a compilation strategy is tailored to a method based on the method's characteristics. This thesis investigates the use of Support Vector Machines in Testarossa, a commercial Just-in-Time compiler employed in the IBM J9 Java Virtual Machine. A new infrastructure allows Testarossa to explore numerous compilation strategies, generating the data needed to train such models. The infrastructure also integrates Testarossa with learned models that predict, on a per-method basis, which compilation strategy best balances code quality and compilation effort. The thesis also presents the results of an extensive experimental evaluation of the infrastructure and compares these results with the performance of the original Testarossa.
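To make the idea concrete, here is a minimal sketch of per-method strategy prediction with a Support Vector Machine using scikit-learn; it is not the Testarossa infrastructure described in the thesis, and the feature set, strategy labels, and data are invented for illustration.

```python
# Illustrative only: predicting a per-method compilation strategy with an SVM.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [bytecode size, loop count, call-site count, max basic-block length]
method_features = [
    [120,  2,  5,  30],
    [800, 10, 40, 120],
    [ 45,  0,  2,  15],
    [300,  4, 12,  60],
]
# Best strategy for each method, found by exploring strategies offline.
best_strategy = ["cheap", "aggressive", "cheap", "balanced"]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(method_features, best_strategy)

# At JIT time, the compiler would query the model for each method it is about
# to compile and apply the predicted strategy.
print(model.predict([[500, 6, 20, 90]]))
```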
|
56 |
La Compilatio de libris naturalibus Aristotelis et aliorum quorundam philosophorum ou Compendium philosophie : histoire et édition préliminaire partielle d’une compilation philosophique du XIIIe siècle / The Compilatio de libris naturalibus Aristotelis et aliorum quorundam philosophorum or Compendium philosophie : historical study and preliminary partial edition of a philosophical compilation of the XIIIth century. Kuhry, Emmanuelle. 10 January 2014
Probably written in the middle of the 13th century in circles close to the University, the anonymous Compendium philosophie, or Compilatio de libris naturalibus Aristotelis et aliorum quorundam philosophorum, offers an abridgement of the Aristotelian corpus on nature divided into eight books. Never edited in the modern era and completely forgotten after the 15th century, the Compendium philosophie has received relatively little study, and the only available edition is a partial one, covering about one seventh of the text, produced in the 1930s from a single manuscript. This doctoral work has not only identified a list of 37 manuscripts in total, which suggests a relatively effective circulation, but has also established that the work existed in at least four different versions. Several elements point, as far as its composition is concerned, to a Cistercian network connected with studies, while the state of the text's "philosophical" sources suggests composition in a university setting. The conjunction of these two findings leads to an original hypothesis on the genesis of the text and the context of its composition. In addition to the inquiry into the manuscript tradition and the sources of the text, this doctoral work attempts to account for the state of the four versions in a preliminary critical edition of part of the books on natural philosophy.
|
57 |
The Design of Intermediate Languages in Optimizing Compilers. Maurer, Luke. 31 October 2018
Every compiler passes code through several stages, each a sort of mini-compiler of its own. Thus each stage may deal with the code in a different representation, which may have little to do with the source or target language. We can describe these in-memory representations as languages in their own right, which we call intermediate languages.

Each intermediate language is designed to accommodate the stage of compilation that handles it. Those toward the end of the compilation pipeline, for instance, tend to have features expressing low-level details of computation. A subtler case is that of the optimization stage, whose role is to transform the program so that it runs faster, uses less memory, and so forth. The optimizer faces tradeoffs: the language should provide enough information to guide optimization algorithms, but all of this information must be kept up to date as the program is transformed. Also, establishing invariants in the language can be helpful both in implementing algorithms and in debugging the implementation, but each invariant may complicate desirable transformations or rule them out altogether. Finally, a language where the invariants are obviously correct may have a form too awkward or otherwise unsuited to the compiler's needs.

Given the properties and invariants that we would like the language to provide, we can approach the design task in a way that gives these features without necessarily sacrificing implementability. Namely, begin with a formal language that makes the desired properties obvious, then translate it to one more suitable for implementation. We can even translate theorems about valid transformations in the formal language to derive correct algorithms in the implementation language.

This dissertation explores the connections between different intermediate languages and how they can be interderived, then demonstrates how such translation led to an improvement to the Glasgow Haskell Compiler optimization engine.

This dissertation includes previously published coauthored material.
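As a toy illustration of how an intermediate language's invariants can simplify an optimization pass (this is not the dissertation's GHC-oriented representation; the little language and the pass are invented for the example), consider an A-normal-form expression language in which every operand is atomic, so constant folding needs only an environment lookup per operand.

```python
# Illustrative toy IL in A-normal form: operands of a primitive operation are
# always atoms (constants or variables), an invariant the pass below relies on.
from dataclasses import dataclass
from typing import Union

@dataclass
class Const:            # atomic: integer literal
    value: int

@dataclass
class Var:              # atomic: reference to a let-bound name
    name: str

Atom = Union[Const, Var]

@dataclass
class Add:              # primitive operation over atoms only
    left: Atom
    right: Atom

@dataclass
class Let:              # let name = rhs in body
    name: str
    rhs: Union[Add, Const, Var]
    body: "Expr"

Expr = Union[Let, Const, Var]

def constant_fold(expr, env=None):
    """Fold additions of known constants; thanks to the ANF invariant we never
    recurse into operands, we only look them up in env."""
    env = env or {}
    if isinstance(expr, Let):
        rhs = expr.rhs
        if isinstance(rhs, Var) and rhs.name in env:
            rhs = Const(env[rhs.name])
        if isinstance(rhs, Add):
            l = env.get(rhs.left.name) if isinstance(rhs.left, Var) else rhs.left.value
            r = env.get(rhs.right.name) if isinstance(rhs.right, Var) else rhs.right.value
            if l is not None and r is not None:
                rhs = Const(l + r)
        if isinstance(rhs, Const):
            env = {**env, expr.name: rhs.value}
        return Let(expr.name, rhs, constant_fold(expr.body, env))
    return expr

# let x = 1 + 2 in let y = x + 3 in y   -- both additions fold to constants
prog = Let("x", Add(Const(1), Const(2)),
           Let("y", Add(Var("x"), Const(3)), Var("y")))
print(constant_fold(prog))
```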
|
58 |
Compilation of Stream Programs onto Embedded Multicore Architectures. January 2012
In recent years, we have observed the prevalence of stream applications in many embedded domains. Stream programs distinguish themselves from traditional sequential programming languages through well-defined independent actors, explicit data communication, and stable code/data access patterns. In order to achieve high performance and low power, scratch-pad memory (SPM) has been introduced in today's embedded multicore processors. Current design frameworks for developing stream applications on SPM-enhanced embedded architectures typically do not include a compiler that can perform automatic partitioning, mapping and scheduling under limited on-chip SPM capacities and memory access delays. Consequently, many designs are implemented manually, which leads to lengthy design tasks and inferior designs. In this work, optimization techniques that automatically compile stream programs onto embedded multicore architectures are proposed. As an initial case study, we implemented an automatic target recognition (ATR) algorithm on the IBM Cell Broadband Engine (BE). Then, integer linear programming (ILP) and heuristic approaches were proposed to schedule stream programs on a single-core embedded processor that has an SPM with code overlay. Later, ILP and heuristic approaches for Compiling Stream programs on SPM-enhanced Multicore Processors (CSMP) were studied. The proposed CSMP ILP and heuristic approaches do not optimize for cycles in stream applications; further, the number of software pipeline stages in the implementation depends on the actor-to-processing-engine (PE) mapping and cannot be controlled. We next present a Retiming technique for Throughput optimization on Embedded Multi-core processors (RTEM). The RTEM approach inherently handles cycles and can accept an upper bound on the number of software pipeline stages to be generated. We further enhance RTEM by incorporating unrolling (URSTEM), which preserves the beneficial properties of the RTEM heuristic and also scales with the number of PEs through unrolling. / Dissertation/Thesis / Ph.D. Computer Science 2012
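For flavor, the sketch below shows a simple greedy heuristic for mapping stream actors to processing engines under a scratch-pad capacity budget; it is not the CSMP, RTEM, or URSTEM formulation studied in the thesis, and the actor workloads, code sizes, and capacities are made-up inputs.

```python
# Illustrative greedy mapping of stream actors to processing engines (PEs).
def map_actors_to_pes(actors, num_pes, spm_capacity):
    """actors: dict name -> (workload, code_size). Returns name -> PE index."""
    load = [0] * num_pes      # accumulated workload per PE
    used = [0] * num_pes      # accumulated SPM bytes per PE
    mapping = {}
    # Place the heaviest actors first, each on the least-loaded PE that still
    # has scratch-pad room for its code.
    for name, (work, size) in sorted(actors.items(), key=lambda kv: -kv[1][0]):
        candidates = [p for p in range(num_pes) if used[p] + size <= spm_capacity]
        if not candidates:
            raise ValueError(f"{name} does not fit in any PE's SPM; overlay needed")
        pe = min(candidates, key=lambda p: load[p])
        mapping[name] = pe
        load[pe] += work
        used[pe] += size
    return mapping

demo = {"src": (10, 4_000), "fir": (80, 12_000), "fft": (120, 20_000), "sink": (5, 2_000)}
print(map_actors_to_pes(demo, num_pes=2, spm_capacity=24_000))
```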
|
59 |
Obfuscation de données pour la protection de programmes contre l'analyse dynamique / Data obfuscation against dynamic program analysis. Riaud, Stéphanie. 14 December 2015
Reverse engineering is a technique that consists in analyzing a product in order to extract a secret. When the targeted product is a computer program, the reverse engineer may seek to extract an algorithm or any element of the program's code. Obfuscation is a protection technique that modifies a program's code to make it harder to reverse engineer. We are interested in the study and development of obfuscation techniques for computer programs: we developed a new code obfuscation technique, demonstrated its effectiveness, and finally implemented another protection technique whose goal is to strengthen the resilience of anti-reverse-engineering protections.

First, we designed and implemented a new obfuscation technique that protects certain specific elements of programs written in the C language. Building on a detailed survey of the analysis techniques used during program reverse engineering, we established the effectiveness of this protection. Second, we backed up these results by demonstrating empirically that the protection can be applied to real programs: it can be applied to high-level code and remains effective in the executable files built from that code. We push the analysis further and show that, when the obfuscation process is carried out carefully, the execution time of protected programs stays within the same order of magnitude as that of unprotected programs. Third, working ahead of the attackers, we develop targeted protection mechanisms aimed at countering the automatic analysis tools used by reverse engineers; their goal is to strengthen the robustness of the techniques applied at a high level by increasing their stealth and by feeding the reverse engineer erroneous results. Our contributions cover several topics related to countering reverse engineering. We developed and implemented new code protection techniques; for those applied at a high level, we devised a process demonstrating that they do not lose effectiveness and that their cost in execution time remains acceptable, and for the lower-level techniques we developed, we demonstrated their effectiveness against the dynamic code analysis tools used during reverse engineering.
|
60 |
IMPROVING PERFORMANCE OF DATA-CENTRIC SYSTEMS THROUGH FINE-GRAINED CODE GENERATION. Gregory M Essertel (8158032). 20 December 2019
The availability of modern hardware with large amounts of memory created a shift in the development of data-centric software: from optimizing I/O operations to optimizing computation. As a result, the main challenge has become using the memory hierarchy (cache, RAM, distributed, etc.) efficiently. To overcome this difficulty, programmers of data-centric programs have had to use low-level APIs such as Pthreads or MPI to manually optimize their software, despite the intrinsic difficulty and low productivity of these APIs. Data-centric systems such as Apache Spark are becoming more and more popular. These kinds of systems offer a much simpler interface and allow programmers and scientists to write in a few lines what would have been thousands of lines of low-level MPI code. The core benefit of these systems comes from the introduction of deferred APIs: the code written by the programmer actually builds a graph representation of the computation to be executed, and this graph can then be optimized and compiled to achieve higher performance.

In this dissertation, we analyze the limitations of current data-centric systems such as Apache Spark on relational and heterogeneous workloads interacting with machine learning frameworks. We show that compiling queries in multiple stages and interfacing with external systems are key impediments to performance because of the inability to optimize across code boundaries. We present Flare, an accelerator for data-centric software, which provides performance comparable to state-of-the-art relational systems while keeping the expressiveness of high-level deferred APIs. Flare achieves order-of-magnitude speedups on programs combining relational processing and machine learning frameworks such as TensorFlow. We look at the impact of compilation on short-running jobs and propose an on-stack-replacement mechanism for generative programming to decrease the overhead introduced by the compilation step. We show that this mechanism can also be used in a more generic way within source-to-source compilers. We develop a new kind of static analysis that allows the reverse engineering of legacy code in order to optimize it with Flare. The novelty of the analysis also makes it useful for more generic problems such as formal verification of programs using dynamic allocation. We have implemented a prototype that successfully verifies programs within the SV-COMP benchmark suite.
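The deferred-API idea described above can be sketched in a few lines (an invented toy, not Flare's or Spark's implementation): user calls only build a graph of the computation, and a rewrite such as operator fusion runs before anything executes.

```python
# Illustrative deferred API: methods record intent in a graph; optimization and
# execution happen only when a result is requested.
class Node:
    def __init__(self, op, *children, **attrs):
        self.op, self.children, self.attrs = op, children, attrs

    def map(self, fn):      return Node("map", self, fn=fn)
    def filter(self, pred): return Node("filter", self, pred=pred)

    def collect(self):
        return execute(optimize(self))   # "compilation" is deferred to here

def source(data):
    return Node("source", data=list(data))

def optimize(node):
    # One toy rewrite: fuse filter(map(x)) so the pipeline makes a single pass.
    if node.op == "filter" and node.children[0].op == "map":
        m = node.children[0]
        return Node("fused", *m.children, fn=m.attrs["fn"], pred=node.attrs["pred"])
    return Node(node.op, *[optimize(c) for c in node.children], **node.attrs)

def execute(node):
    if node.op == "source":
        return node.attrs["data"]
    child = execute(node.children[0])
    if node.op == "map":
        return [node.attrs["fn"](x) for x in child]
    if node.op == "filter":
        return [x for x in child if node.attrs["pred"](x)]
    if node.op == "fused":
        fn, pred = node.attrs["fn"], node.attrs["pred"]
        return [y for x in child for y in [fn(x)] if pred(y)]
    raise ValueError(node.op)

print(source(range(10)).map(lambda x: x * x).filter(lambda y: y % 2 == 0).collect())
```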
|