51 |
Réécriture et compilation de confiance / Trustworthy rewriting and compilation. Reilles, Antoine; Kirchner, Claude. January 2006 (has links) (PDF)
Doctoral thesis: Computer Science: INPL: 2006. / Title taken from the title screen. Includes bibliographical references.
|
53 |
Applying support vector machines to discover just-in-time method-specific compilation strategies. Nabinger Sanchez, Ricardo. 11 1900 (has links)
Adaptive Just-in-Time compilers employ multiple techniques to concentrate compilation efforts on the most promising spots of the application, balancing tight compilation budgets with an appropriate level of code quality. Some compiler researchers propose that Just-in-Time compilers should benefit from method-specific compilation strategies. These strategies can be discovered through machine-learning techniques, where a compilation strategy is tailored to a method based on the method's characteristics. This thesis investigates the use of Support Vector Machines in Testarossa, a commercial Just-in-Time compiler employed in the IBM J9 Java Virtual Machine. It introduces a new infrastructure that allows Testarossa to explore numerous compilation strategies, generating the data needed to train such models. The infrastructure also integrates Testarossa with learned models that predict, on a per-method basis, which compilation strategy balances code quality and compilation effort. The thesis also presents the results of an extensive experimental evaluation of the infrastructure and compares these results with the performance of the original Testarossa.
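As a rough illustration of the learning setup this abstract describes, a support vector machine can be trained to map method characteristics to a compilation strategy. The feature set and strategy labels below are invented for the example; they are not Testarossa's actual features.

```python
# Hypothetical sketch of per-method compilation-strategy prediction with an
# SVM. Features and strategy labels are invented, not Testarossa's.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row describes one method: bytecode size, loop count, call-site
# count, and observed invocation frequency.
X_train = [
    [120, 0, 3, 15000],   # small, hot, straight-line code
    [4800, 6, 40, 900],   # large method with loops, lukewarm
    [300, 2, 5, 60000],   # small loop kernel, very hot
    [2500, 1, 30, 50],    # large, rarely called
]
# Labels: which compilation strategy paid off when it was tried.
y_train = ["inline-aggressive", "cheap-opts", "loop-opts", "cheap-opts"]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)

# At JIT time, the compiler would query the model for each new method.
new_method = [[450, 3, 8, 42000]]
print(model.predict(new_method))  # e.g. ['loop-opts']
```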
|
54 |
The Compilatio de libris naturalibus Aristotelis et aliorum quorundam philosophorum or Compendium philosophie: historical study and preliminary partial edition of a philosophical compilation of the XIIIth century. Kuhry, Emmanuelle. 10 January 2014 (has links)
Probably composed in the middle of the thirteenth century in a milieu close to the university, the anonymous Compendium philosophie, or Compilatio de libris naturalibus Aristotelis et aliorum quorundam philosophorum, presents an abridgment of the Aristotelian corpus on nature, divided into eight books. Never edited in the modern era and entirely forgotten after the fifteenth century, the Compendium philosophie has received relatively little study, and the only available edition is a partial one, covering about one seventh of the text, produced in the 1930s from a single manuscript. This doctoral work has not only identified a total of 37 manuscripts, which suggests a relatively effective diffusion, but has also established that the work circulated in at least four different versions. As regards its composition, several elements point to a Cistercian network connected with studies, while the state of the text's philosophical sources suggests composition in a university milieu. The conjunction of these two findings leads to an original hypothesis on the genesis of the text and the context in which it was written. Besides the inquiry into the manuscript tradition and the sources of the text, this doctoral work offers a preliminary critical edition, accounting for the state of all four versions, of part of the books on natural philosophy.
|
55 |
The Design of Intermediate Languages in Optimizing Compilers. Maurer, Luke. 31 October 2018 (has links)
Every compiler passes code through several stages, each a sort of mini-compiler of its own. Thus each stage may deal with the code in a different representation, which may have little to do with the source or target language. We can describe these in-memory representations as languages in their own right, which we call intermediate languages.

Each intermediate language is designed to accommodate the stage of compilation that handles it. Those toward the end of the compilation pipeline, for instance, tend to have features expressing low-level details of computation. A subtler case is that of the optimization stage, whose role is to transform the program so that it runs faster, uses less memory, and so forth. The optimizer faces tradeoffs: the language should provide enough information to guide optimization algorithms, but all of this information must be kept up to date as the program is transformed. Also, establishing invariants in the language can be helpful both in implementing algorithms and in debugging the implementation, but each invariant may complicate desirable transformations or rule them out altogether. Finally, a language where the invariants are obviously correct may have a form too awkward or otherwise unsuited to the compiler's needs.

Given the properties and invariants that we would like the language to provide, we can approach the design task in a way that gives these features without necessarily sacrificing implementability. Namely, begin with a formal language that makes the desired properties obvious, then translate it to one more suitable for implementation. We can even translate theorems about valid transformations in the formal language to derive correct algorithms in the implementation language.

This dissertation explores the connections between different intermediate languages and how they can be interderived, then demonstrates how such a translation led to an improvement to the Glasgow Haskell Compiler's optimization engine. This dissertation includes previously published co-authored material.
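To make the approach concrete, here is a minimal sketch (invented for illustration, not drawn from the dissertation): a source language allowing arbitrary nesting is translated into A-normal form, an intermediate language whose central invariant (every operand is atomic) holds by construction.

```python
# Minimal sketch: an expression language translated into A-normal form
# (ANF), an IR whose invariant (all operands are atomic) is enforced by
# the translation itself. Toy example, not GHC's intermediate language.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object
    right: object

counter = 0
def fresh() -> str:
    global counter
    counter += 1
    return f"t{counter}"

def to_anf(expr, bindings):
    """Return an atom for expr, appending let-bindings as needed."""
    if isinstance(expr, (Var, Lit)):
        return expr                                # already atomic
    if isinstance(expr, Add):
        left = to_anf(expr.left, bindings)
        right = to_anf(expr.right, bindings)
        name = fresh()
        bindings.append((name, Add(left, right)))  # operands now atomic
        return Var(name)
    raise TypeError(expr)

# (1 + 2) + x  becomes  let t1 = 1 + 2 in let t2 = t1 + x in t2
bindings = []
result = to_anf(Add(Add(Lit(1), Lit(2)), Var("x")), bindings)
for name, rhs in bindings:
    print(f"let {name} = {rhs}")
print("in", result)
```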
|
56 |
Compilation of Stream Programs onto Embedded Multicore Architectures. January 2012 (has links)
In recent years, we have observed the prevalence of stream applications in many embedded domains. Stream programs distinguish themselves from traditional sequential programming languages through well-defined independent actors, explicit data communication, and stable code/data access patterns. In order to achieve high performance and low power, scratch pad memory (SPM) has been introduced in today's embedded multicore processors. Current design frameworks for developing stream applications on SPM-enhanced embedded architectures typically do not include a compiler that can perform automatic partitioning, mapping, and scheduling under limited on-chip SPM capacities and memory access delays. Consequently, many designs are implemented manually, which leads to lengthy design cycles and inferior designs. In this work, optimization techniques that automatically compile stream programs onto embedded multicore architectures are proposed. As an initial case study, we implemented an automatic target recognition (ATR) algorithm on the IBM Cell Broadband Engine (BE). We then proposed integer linear programming (ILP) and heuristic approaches to schedule stream programs on a single-core embedded processor that has an SPM with code overlay. Later, we studied ILP and heuristic approaches for Compiling Stream programs on SPM-enhanced Multicore Processors (CSMP). The proposed CSMP ILP and heuristic approaches do not optimize for cycles in stream applications, and the number of software pipeline stages in the implementation depends on the actor-to-processing-engine (PE) mapping and is uncontrollable. We next presented a Retiming technique for Throughput optimization on Embedded Multi-core processors (RTEM). The RTEM approach inherently handles cycles and can accept an upper bound on the number of software pipeline stages to be generated. We further enhanced RTEM by incorporating unrolling (URSTEM), which preserves all the beneficial properties of the RTEM heuristic and also scales with the number of PEs through unrolling. / Dissertation/Thesis / Ph.D. Computer Science 2012
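For a flavor of the underlying mapping problem, here is a greedy sketch invented for illustration; it is not the CSMP ILP formulation or the RTEM heuristic, and the actor sizes, workloads, and PE count are made up.

```python
# Illustrative only: greedily map stream-graph actors onto processing
# engines (PEs) with limited scratch pad memory (SPM).

def map_actors(actors, num_pes, spm_capacity):
    """actors: list of (name, code_size, workload). Returns pe -> [names]."""
    mapping = {pe: [] for pe in range(num_pes)}
    used_spm = [0] * num_pes
    load = [0] * num_pes
    # Place heaviest actors first, on the least-loaded PE that still
    # has SPM room for the actor's code.
    for name, size, work in sorted(actors, key=lambda a: -a[2]):
        candidates = [pe for pe in range(num_pes)
                      if used_spm[pe] + size <= spm_capacity]
        if not candidates:
            raise RuntimeError(f"no SPM room for actor {name}")
        pe = min(candidates, key=lambda p: load[p])
        mapping[pe].append(name)
        used_spm[pe] += size
        load[pe] += work
    return mapping

actors = [("src", 4, 10), ("fir", 12, 80), ("fft", 20, 120), ("sink", 4, 10)]
print(map_actors(actors, num_pes=2, spm_capacity=24))
# {0: ['fft'], 1: ['fir', 'src', 'sink']}
```

An ILP formulation, as studied in the dissertation, would instead search for a provably optimal mapping at higher solve cost; the greedy pass above trades optimality for speed.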
|
57 |
Data obfuscation against dynamic program analysis. Riaud, Stéphanie. 14 December 2015 (has links)
Reverse engineering is a technique that consists in analyzing a product in order to extract a secret. When the target is a computer program, the reverse engineer may seek to extract an algorithm or any other element of the program's code. Obfuscation is a protection technique that consists in modifying a program's code to make it harder to reverse engineer. We are interested in the study and development of obfuscation techniques for computer programs. We developed a new code obfuscation technique, demonstrated its effectiveness, and finally implemented a further protection technique whose goal is to strengthen the resilience of anti-reverse-engineering protections.
First, we designed and implemented a new obfuscation technique to protect certain specific elements contained in programs written in the C language. Relying on a detailed survey of the analysis techniques used in program reverse engineering, we established the effectiveness of this protection. Second, we reinforced these results by demonstrating empirically that the protection can be applied to real programs: it can be applied to high-level code and remains effective in the executable files built from that code. We pushed the analysis further, showing that when the obfuscation process is carried out carefully, the execution time of protected programs stays within the same order of magnitude as that of unprotected programs. Third, working ahead of the curve, we developed targeted protection mechanisms aimed at defeating the automated analysis tools used by reverse engineers; their purpose is to strengthen the robustness of the protections applied at a high level, by increasing their stealth and by feeding the reverse engineer erroneous results. Our contributions cover various topics related to countering reverse engineering. We developed and implemented new code protection techniques. For the techniques applied at a high level, we devised a process demonstrating that they lose no effectiveness and that their cost in execution time remains acceptable. For the lower-level techniques we developed, we demonstrated their effectiveness against the dynamic code analysis tools used in reverse engineering.
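As an illustration of data obfuscation in general, here is a classic affine integer encoding; it is a textbook transformation, not the specific technique developed in the thesis (which targets C programs).

```python
# Illustrative sketch of one classic *data* obfuscation: an integer is
# stored in affine-encoded form, and arithmetic is done directly on the
# encoded values, so the cleartext operands never appear mid-computation.

A, B = 17, 42                 # encoding constants (A odd => invertible mod 2^32)
MASK = (1 << 32) - 1
A_INV = pow(A, -1, 1 << 32)   # modular inverse of A mod 2^32 (Python 3.8+)

def encode(x: int) -> int:
    return (A * x + B) & MASK

def decode(y: int) -> int:
    return (A_INV * (y - B)) & MASK

def obf_add(y1: int, y2: int) -> int:
    # Addition on encoded values:
    # (A*x1 + B) + (A*x2 + B) - B == A*(x1 + x2) + B
    return (y1 + y2 - B) & MASK

a, b = encode(10), encode(32)
print(decode(obf_add(a, b)))  # 42
```

A dynamic analysis that traces values in memory sees only the encoded forms; only a decode at the very end reveals the result, which is what makes this family of transformations interesting against the tools the thesis considers.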
|
58 |
Improving Performance of Data-Centric Systems Through Fine-Grained Code Generation. Gregory M. Essertel (8158032). 20 December 2019 (has links)
The availability of modern hardware with large amounts of memory has created a shift in the development of data-centric software, from optimizing I/O operations to optimizing computation. As a result, the main challenge has become using the memory hierarchy (cache, RAM, distributed storage, etc.) efficiently. To overcome this difficulty, programmers of data-centric software have had to use low-level APIs such as Pthreads or MPI to optimize their programs manually, despite the intrinsic difficulties and low productivity of these APIs. Data-centric systems such as Apache Spark are becoming more and more popular: they offer a much simpler interface and allow programmers and scientists to write in a few lines what would have been thousands of lines of low-level MPI code. The core benefit of these systems comes from the introduction of deferred APIs: the code written by the programmer actually builds a graph representation of the computation to be executed, and this graph can then be optimized and compiled to achieve higher performance.

In this dissertation, we analyze the limitations of current data-centric systems such as Apache Spark on relational and heterogeneous workloads interacting with machine-learning frameworks. We show that compiling queries in multiple stages and interfacing with external systems is a key impediment to performance, because such systems cannot optimize across code boundaries. We present Flare, an accelerator for data-centric software, which provides performance comparable to state-of-the-art relational systems while keeping the expressiveness of high-level deferred APIs. Flare displays order-of-magnitude speedups on programs combining relational processing with machine-learning frameworks such as TensorFlow. We examine the impact of compilation on short-running jobs and propose an on-stack-replacement mechanism for generative programming that decreases the overhead introduced by the compilation step; we show that this mechanism can also be used more generically within source-to-source compilers. We also develop a new kind of static analysis that allows the reverse engineering of legacy code in order to optimize it with Flare. The analysis is further useful for more generic problems, such as formal verification of programs using dynamic allocation; we have implemented a prototype that successfully verifies programs within the SV-COMP benchmark suite.
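A toy sketch of the deferred-API idea described above (invented for illustration; not Flare's or Spark's actual implementation): user code builds an expression graph instead of executing eagerly, so the system can optimize the graph and generate code before anything runs.

```python
# Toy deferred API: building a computation graph, optimizing it, then
# generating code. Hypothetical names; not Flare's implementation.

class Expr:
    def __add__(self, other):
        return Op("+", self, as_expr(other))
    def __mul__(self, other):
        return Op("*", self, as_expr(other))

class Const(Expr):
    def __init__(self, value):
        self.value = value

class Col(Expr):
    def __init__(self, name):
        self.name = name

class Op(Expr):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

def as_expr(x):
    return x if isinstance(x, Expr) else Const(x)

def fold(e):
    """One graph-level optimization: constant folding."""
    if isinstance(e, Op):
        l, r = fold(e.left), fold(e.right)
        if isinstance(l, Const) and isinstance(r, Const):
            return Const(l.value + r.value if e.op == "+" else l.value * r.value)
        return Op(e.op, l, r)
    return e

def codegen(e):
    """Compile the optimized graph to a Python source fragment."""
    if isinstance(e, Const):
        return str(e.value)
    if isinstance(e, Col):
        return f"row['{e.name}']"
    return f"({codegen(e.left)} {e.op} {codegen(e.right)})"

# Nothing executes while the graph is built:
query = Col("price") * (Const(2) + Const(3))
print(codegen(fold(query)))   # (row['price'] * 5)
```

Because the whole computation is visible as one graph, optimizations like the folding step above can cross what would otherwise be stage or library boundaries, which is the limitation the dissertation targets.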
|
59 |
Runtime of WebAssembly: A study into WebAssembly runtime. Eriksson, Adam. January 2023 (has links)
WebAssembly (Wasm) is an assembly-like format produced by compiling other languages to Wasm; the resulting file can then be run on the web at near-native speed. The objective of this study is to find out how WebAssembly's runtime performance compares to JavaScript and to native code, and whether different browsers affect WebAssembly runtime. Two methods were used to gather this information. First, literature and articles were used to gather data on JavaScript and native runtime compared to WebAssembly. Second, an empirical study was conducted to compare the WebAssembly runtimes of four different browsers. Comparing WebAssembly and JavaScript, it was found that WebAssembly is not always the faster alternative, for several reasons, chief among them how the two are compiled and optimised. Compared to native code, WebAssembly was clearly slower; these slowdowns come primarily from the increase in code size, though the virtual environment and security checks also contribute. The empirical study revealed differences between browsers in both compilation speed and execution time: among the Chromium-based browsers the difference in execution time was very small, and Firefox was always faster, while in compilation time Chrome was fastest, with the other browsers showing varying results. The research concludes that WebAssembly, used correctly, can provide a useful boost to runtime performance on websites; it is not going to replace JavaScript, but can be used together with it. It also concludes that the user's choice of browser has a small impact on WebAssembly and can cause differences in runtime.
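Outside the browser, a measurement of this kind might look like the following sketch, assuming the wasmtime Python package is installed; the WAT module and harness are invented for illustration, and absolute numbers will vary by machine.

```python
# Hypothetical harness: time a Wasm function against equivalent Python.
# Assumes `pip install wasmtime`; results are machine-dependent.
import time
from wasmtime import Engine, Store, Module, Instance

WAT = """
(module
  (func (export "sum") (param i32) (result i64)
    (local $i i32) (local $acc i64)
    (block $done
      (loop $loop
        (br_if $done (i32.ge_u (local.get $i) (local.get 0)))
        (local.set $acc (i64.add (local.get $acc)
                                 (i64.extend_i32_u (local.get $i))))
        (local.set $i (i32.add (local.get $i) (i32.const 1)))
        (br $loop)))
    (local.get $acc)))
"""

engine = Engine()
store = Store(engine)
instance = Instance(store, Module(engine, WAT), [])
wasm_sum = instance.exports(store)["sum"]

n = 10_000_000
t0 = time.perf_counter()
wasm_result = wasm_sum(store, n)          # sum of 0..n-1 inside Wasm
t1 = time.perf_counter()
py_result = sum(range(n))                 # the same computation in Python
t2 = time.perf_counter()

assert wasm_result == py_result
print(f"wasm: {t1 - t0:.3f}s  python: {t2 - t1:.3f}s")
```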
|
60 |
Simple Open-Source Formal Verification of Industrial Programs. Peterson, Christopher Disney. 01 March 2023 (PDF)
Industrial programs written on Programmable Logic Controllers (PLCs) have become an essential component of many modern industries, including automotive, aerospace, manufacturing, infrastructure, and even amusement parks. As these safety-critical systems become larger and more complex, ensuring their continuous error-free operation has become a significant challenge. Formal methods are a potential solution to this issue but have traditionally required substantial time and expertise to deploy. This usability issue is compounded by the fact that PLCs are highly proprietary and carry substantial licensing costs, making it difficult to learn about or deploy formal methods on them.
This thesis presents the OPPP (Open-source Proving of PLC Programs) system as a solution to this usability issue. The OPPP system allows the end-to-end creation and verification of PLC programs from within the development environment. The system is created with an emphasis on ease of use, with formal constraints presented as English phrases that require no special knowledge to understand. It uses entirely open-source components, including modified versions of both the OpenPLC development environment and the PLCverif verification platform. The OPPP system is then demonstrated by formalizing the requirements of two college-level introductory PLC programming problems, and it is shown to correctly verify a known-good solution and find the errors in a known-bad solution to each problem.
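To illustrate the kind of property such a tool verifies, here is a minimal sketch, invented for illustration and not OPPP's mechanism: it models a start/stop motor latch, a staple of introductory PLC courses, as a scan-cycle function, and exhaustively checks a safety invariant over all bounded input sequences.

```python
# Illustrative only: a PLC-style start/stop latch modeled in Python,
# with the safety property "stop always wins" checked by exhaustive
# exploration of every input sequence up to a bounded depth.
from itertools import product

def scan(motor: bool, start: bool, stop: bool) -> bool:
    """One PLC scan: motor latches on with start, drops out with stop."""
    return (start or motor) and not stop

def check(depth: int) -> None:
    for inputs in product([False, True], repeat=2 * depth):
        motor = False
        for i in range(depth):
            start, stop = inputs[2 * i], inputs[2 * i + 1]
            motor = scan(motor, start, stop)
            # Invariant: whenever stop is pressed, the motor is off.
            assert not (stop and motor), f"violated at step {i}: {inputs}"
    print(f"invariant holds on all {4 ** depth} input sequences of length {depth}")

check(depth=8)   # 65536 sequences, checked in well under a second
```

A real PLC verifier works on the controller's own languages and uses model checking rather than brute-force enumeration, but the shape of the question, "does this invariant hold over every possible input history?", is the same.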
|