351 |
Interpret dynamického programovacího jazyka pro vědecké výpočty / Interpreter of a Dynamic Programming Language for Scientific Computing
Ocelík, Tomáš, January 2012
The master's thesis deals with the design of a dynamic, reflective, prototype-based language. First, the basic principles of this language group are explained and well-known representatives are described. Languages for scientific computing are then briefly discussed. The next section of the thesis describes the proposed programming language in detail, including its grammar and semantics. The principles of type checking and inheritance are explained, and the thesis demonstrates how basic control structures known from other languages are implemented. The following section presents the design of a virtual machine for the language: the computational model used, the organization of the object memory, and the internal representation of the language's important structures. Finally, dynamic type checking, the compiler, and the compilation of typical structures to the virtual machine's internal code are discussed.
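To make the prototype-based delegation and dynamic type checking the abstract refers to concrete, here is a minimal sketch (written in Python for illustration; the `Obj`, `lookup`, `send`, and `is_kind_of` names are invented stand-ins, not the thesis's actual language design):

```python
# Minimal sketch of prototype-based delegation with a dynamic type check.
# All names are illustrative; the thesis defines its own language.

class Obj:
    """An object is just a slot table plus an optional parent (prototype)."""
    def __init__(self, parent=None, **slots):
        self.parent = parent
        self.slots = dict(slots)

    def lookup(self, name):
        # Walk the delegation chain until the slot is found.
        obj = self
        while obj is not None:
            if name in obj.slots:
                return obj.slots[name]
            obj = obj.parent
        raise AttributeError(name)

    def send(self, name, *args):
        # Dynamic dispatch: the receiver is passed explicitly.
        return self.lookup(name)(self, *args)

# "Inheritance" is just creating an object with a parent link.
point = Obj(x=0.0, y=0.0,
            norm=lambda self: (self.lookup("x")**2 + self.lookup("y")**2) ** 0.5)
p = Obj(parent=point, x=3.0, y=4.0)   # overrides x and y, inherits norm
assert p.send("norm") == 5.0

# A dynamic type check amounts to asking whether an object
# (transitively) delegates to a given prototype.
def is_kind_of(obj, proto):
    while obj is not None:
        if obj is proto:
            return True
        obj = obj.parent
    return False

assert is_kind_of(p, point)
```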
|
352 |
Optimalizace překladu agentních jazyků různé úrovně abstrakce / Optimalisation of Agent Languages Compiler
Kalmár, Róbert, January 2012
The aim of this work is the optimization of the AHLL language compiler. Several intermediate representations of compiled code are introduced, along with code optimization techniques. The main part of the work focuses on implementing these optimization techniques and on generating target code in the ALLL language. Finally, the results achieved by the new version of the AHLL compiler are presented, together with some ideas for future work on AHLL and the compiler.
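As an illustration of the kind of intermediate-representation optimization the abstract mentions, here is a generic constant-folding pass (a Python sketch over an invented tuple-based IR; AHLL's real intermediate representations and passes differ):

```python
# Generic constant folding over a tiny expression IR, sketched in Python.
# The IR shape ("const"/"var"/"binop" tuples) is invented for illustration.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def fold(node):
    kind = node[0]
    if kind in ("const", "var"):
        return node
    if kind == "binop":
        _, op, lhs, rhs = node
        lhs, rhs = fold(lhs), fold(rhs)
        if lhs[0] == "const" and rhs[0] == "const":
            # Both operands known: evaluate at compile time.
            return ("const", OPS[op](lhs[1], rhs[1]))
        return ("binop", op, lhs, rhs)
    return node

# (2 * 3) + x  ==>  6 + x
expr = ("binop", "+", ("binop", "*", ("const", 2), ("const", 3)), ("var", "x"))
assert fold(expr) == ("binop", "+", ("const", 6), ("var", "x"))
```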
|
353 |
Metody detekce funkcí při zpětném překladu kódu / Functions Detection in Decompilation
Kábele, Břetislav, January 2012
This work describes methods of function detection in decompilation. It provides basic information about reverse engineering and its applications in computer science and beyond, and introduces the decompiler developed by the Lissom research group at FIT VUT Brno. The main objective is to explain several methods of function detection, discuss their advantages and disadvantages, and identify the problems involved. After detecting the start, end, and body of a function, its parameters and return values must be found; several algorithms for this are presented. The output of this thesis is the design and implementation of architecture-independent function and parameter detection.
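One simple, architecture-independent strategy for finding function starts is to collect direct call targets and treat each as an entry point; a rough sketch follows (the instruction format and both helper names are invented for illustration; the thesis's methods on real machine code are considerably more involved):

```python
# Sketch: detect function entry points from direct call targets in a
# generic instruction stream, then split bodies at the next entry point.

def detect_function_starts(instructions):
    """instructions: list of (address, mnemonic, operand) tuples."""
    starts = set()
    for addr, mnemonic, operand in instructions:
        if mnemonic == "call" and isinstance(operand, int):
            starts.add(operand)   # every direct call target starts a function
    return sorted(starts)

def split_into_functions(instructions, starts):
    # Naively, a function body runs from its entry to the next entry point.
    bounds = starts + [instructions[-1][0] + 1]
    return {
        lo: [ins for ins in instructions if lo <= ins[0] < hi]
        for lo, hi in zip(bounds, bounds[1:])
    }

program = [
    (0, "call", 4), (1, "call", 8), (2, "ret", None), (3, "nop", None),
    (4, "mov", "r0"), (5, "ret", None),
    (8, "add", "r1"), (9, "ret", None),
]
starts = detect_function_starts(program)
assert starts == [4, 8]
bodies = split_into_functions(program, starts)
assert [a for a, *_ in bodies[4]] == [4, 5]
```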
|
354 |
Die C# Schnittstelle der Referenzattributgrammatik-gesteuerten Graphersetzungsbibliothek RACR: Übersicht, Anwendung und Implementierung: Entwicklerhandbuch / The C# Interface of the Reference-Attribute-Grammar-Controlled Graph-Rewriting Library RACR: Overview, Application and Implementation: Developer Manual
Langner, Daniel; Bürger, Christoff, 04 July 2018
This report presents RACR-NET, a C# interface for RACR, the reference-attribute-grammar-controlled graph-rewriting library.
RACR-NET makes the declarative, dynamic language-specification, instantiation, and evaluation mechanisms of the RACR Scheme library available to object-oriented programming. In particular, this includes the automatic incremental evaluation of attribute-based semantic analyses, and thus the automatic caching of parameterized function methods. Graph rewrites correspond to state changes of object instances and the invalidation of derived computations.
The focus of this report is the object-oriented programming interface of RACR-NET, its practical application, and its implementation. The report is a reference manual for RACR-NET users and developers.

Contents:
1. Introduction
1.1. Task Description
1.2. Structure of the Report
2. Conceptual and Technical Prerequisites
2.1. Overview of RAG-Controlled Graph Rewriting
2.2. Scheme
2.3. The RACR Scheme Library
2.4. The .NET Framework and the Common Language Infrastructure
2.5. IronScheme
3. RACR-NET Implementation: Procedural Interface
3.1. Scheme in C#
3.2. RACR in C#
3.3. Requirements Analysis
3.4. Implementation of the Procedural Interface
4. RACR-NET Implementation: Object-Oriented Interface
4.1. Overview of the Object-Oriented Interface
4.2. Application Example
4.3. Implementation Challenges
4.4. Implementation
5. Evaluation
5.1. Testing the Interface
5.2. Performance Measurements and Comparisons
6. Summary and Outlook
6.1. An Object-Oriented Library for RAG-Controlled Graph Rewriting
6.2. Future Work
A. Bibliography
B. MIT License
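The central mechanism the report describes, demand-driven attribute evaluation with automatic caching that graph rewrites invalidate, can be sketched roughly as follows (Python rather than Scheme or C#; the `Node`/`attr`/`flush` names and the whole-tree flush are simplifications invented for illustration, not RACR's actual API, which tracks dependencies per attribute instance and flushes selectively):

```python
# Rough sketch: attribute values are cached on first demand, and a
# rewrite flushes the caches so dependent analyses are re-evaluated.

ATTRS = {}  # (node kind, attribute name) -> attribute equation

class Node:
    def __init__(self, kind, children=(), **terminals):
        self.kind, self.children = kind, list(children)
        self.terminals = dict(terminals)
        self.cache = {}

    def attr(self, name):
        if name not in self.cache:                 # memoize on first demand
            self.cache[name] = ATTRS[(self.kind, name)](self)
        return self.cache[name]

    def rewrite_terminal(self, key, value, root):
        self.terminals[key] = value                # the "graph rewrite"
        flush(root)                                # invalidate derived results

def flush(node):
    node.cache.clear()
    for child in node.children:
        flush(child)

# A tiny attribute grammar: a Sum node's value is derived from its leaves.
ATTRS[("Lit", "value")] = lambda n: n.terminals["v"]
ATTRS[("Sum", "value")] = lambda n: sum(c.attr("value") for c in n.children)

a, b = Node("Lit", v=1), Node("Lit", v=2)
root = Node("Sum", children=[a, b])
assert root.attr("value") == 3          # evaluated once, then cached
a.rewrite_terminal("v", 10, root)       # rewrite invalidates the caches
assert root.attr("value") == 12         # automatically re-evaluated
```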
|
355 |
Reusable semantics for implementation of Python optimizing compilers
Melançon, Olivier, 08 1900
Le langage de programmation Python est aujourd'hui parmi les plus populaires au monde grâce à son accessibilité ainsi qu'à l'existence d'un grand nombre de librairies standards. Paradoxalement, Python est également reconnu pour ses performances médiocres lors de l'exécution de nombreuses tâches. Ainsi, l'écriture d'implémentations efficaces du langage est nécessaire. Elle est toutefois freinée par la sémantique complexe de Python, ainsi que par l'absence de sémantique formelle officielle.
Pour régler ce problème, nous présentons une sémantique formelle pour Python axée sur l’implémentation de compilateurs optimisants. Cette sémantique est écrite de manière à pouvoir être intégrée et analysée aisément par des compilateurs déjà existants.
Nous introduisons également semPy, un évaluateur partiel de notre sémantique formelle. Celui-ci permet d'identifier et de retirer automatiquement certaines opérations redondantes dans la sémantique de Python. Ce faisant, semPy génère une sémantique naturellement plus performante lorsqu'exécutée.
Nous terminons en présentant Zipi, un compilateur optimisant pour le langage Python développé avec l'assistance de semPy. Sur certaines tâches, Zipi offre des performances compétitionnant avec celle de PyPy, un compilateur Python reconnu pour ses bonnes performances. Ces résultats ouvrent la porte à des optimisations basées sur une évaluation partielle générant une implémentation spécialisée pour les cas d'usage fréquent du langage. / Python is among the most popular programming languages in the world due to its accessibility and extensive standard library. Paradoxically, Python is also known for its poor performance on many tasks. Hence, more efficient implementations of the language are required. The development of such optimized implementations is nevertheless hampered by the complex semantics of Python and the lack of an official formal semantics. We address this issue by presenting a formal semantics for Python focused on the development of optimizing compilers. This semantics is written so as to be easily reusable by existing compilers. We also introduce semPy, a partial evaluator of our formal semantics. This tool makes it possible to automatically target and remove redundant operations from the semantics of Python. As such, semPy generates a semantics which naturally executes more efficiently. Finally, we present Zipi, a Python optimizing compiler developed with the aid of semPy. On some tasks, Zipi displays performance competitive with that of PyPy, a Python compiler known for its good performance. These results open the door to optimizations based on a partial evaluation technique which generates specialized implementations for frequent use cases of the language.
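A toy illustration of what partially evaluating a semantics can remove, assuming operand types are statically known (the `generic_add` and `specialize_add_int_int` names are invented for illustration; semPy's representation of the Python semantics is far richer):

```python
# Sketch of the idea behind partial evaluation of language semantics:
# when a compiler can prove operand types, the generic dispatch logic
# is redundant and can be evaluated away at compile time.

def generic_add(x, y):
    # Full Python-style semantics: try __add__, fall back to __radd__.
    r = type(x).__add__(x, y)
    if r is NotImplemented:
        r = type(y).__radd__(y, x)
    return r

def specialize_add_int_int():
    # If both operands are known to be ints, the dispatch and the
    # NotImplemented check can be removed, leaving one primitive op.
    return lambda x, y: int.__add__(x, y)

fast_add = specialize_add_int_int()
assert generic_add(2, 3) == fast_add(2, 3) == 5
```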
|
356 |
Crowdtuning : towards practical and reproducible auto-tuning via crowdsourcing and predictive analytics
Memon, Abdul Wahid, 17 June 2016
Le réglage des heuristiques d'optimisation de compilateur pour de multiples cibles ou implémentations d'une même architecture est devenu complexe. De plus, ce problème est généralement traité de façon ad-hoc et consomme beaucoup de temps sans être nécessairement reproductible. Enfin, des erreurs de choix de paramétrage d'heuristiques sont fréquentes en raison du grand nombre de possibilités d'optimisation et des interactions complexes entre tous les composants matériels et logiciels. La prise en compte de multiples exigences, comme la performance, la consommation d'énergie, la taille de code, la fiabilité et le coût, peut aussi nécessiter la gestion de plusieurs solutions candidates. La compilation itérative avec profil d'exécution (profiling feedback), le réglage automatique (auto tuning) et l'apprentissage automatique ont montré un grand potentiel pour résoudre ces problèmes. Par exemple, nous les avons utilisés avec succès pour concevoir le premier compilateur qui utilise l'apprentissage pour l'optimisation automatique de code. Il s'agit du compilateur Milepost GCC, qui apprend automatiquement les meilleures optimisations pour plusieurs programmes, données et architectures en se basant sur les caractéristiques statiques et dynamiques du programme. Malheureusement, son utilisation en pratique a été très limitée par le temps d'apprentissage très long et le manque de benchmarks et de données représentatives. De plus, les modèles d'apprentissage «boîte noire» ne pouvaient pas représenter de façon pertinente les corrélations entre les caractéristiques des programmes ou architectures et les meilleures optimisations. Dans cette thèse, nous présentons une nouvelle méthodologie et un nouvel écosystème d'outils (framework) sous la nomination Collective Mind (cM). L'objectif est de permettre à la communauté de partager les différents benchmarks, données d'entrée, compilateurs, outils et autres objets tout en formalisant et facilitant la contribution participative aux boucles d'apprentissage. Une contrainte est la reproductibilité des expérimentations pour l'ensemble des utilisateurs et plateformes. Notre cadre de travail open-source et notre dépôt (repository) public permettent de rendre le réglage automatique et l'apprentissage d'optimisations praticable. De plus, cM permet à la communauté de valider les résultats, les comportements inattendus et les modèles conduisant à de mauvaises prédictions. cM permet aussi de fournir des informations utiles pour l'amélioration et la personnalisation des modules de réglage automatique et d'apprentissage ainsi que pour l'amélioration des modèles de prévision et l'identification des éléments manquants. Notre analyse et évaluation du cadre de travail proposé montre qu'il peut effectivement exposer, isoler et identifier de façon collaborative les principales caractéristiques qui contribuent à la précision de la prédiction du modèle. En même temps, la formalisation du réglage automatique et de l'apprentissage nous permettent d'appliquer en permanence des techniques standards de réduction de complexité. Ceci permet de se contenter d'un ensemble minimal d'optimisations pertinentes ainsi que de benchmarks et de données d'entrée réellement représentatifs. Nous avons publié la plupart des résultats expérimentaux, des benchmarks et des données d'entrée à l'adresse http://c-mind.org tout en validant nos techniques dans le projet EU FP6 Milepost et durant un stage de thèse HiPEAC avec STMicroelectronics.
/ Tuning general compiler optimization heuristics or optimizing software for rapidly evolving hardware has become intolerably complex, ad hoc, time consuming and error prone due to the enormous number of available design and optimization choices, complex interactions between all software and hardware components, and multiple strict requirements placed on performance, power consumption, size, reliability and cost. Iterative feedback-directed compilation, auto-tuning and machine learning have shown great potential to solve the above problems. For example, we successfully used them to enable the world's first machine-learning-based self-tuning compiler, Milepost GCC, which automatically learns the best optimizations across multiple programs, data sets and architectures based on static and dynamic program features. Unfortunately, its practical use was very limited by very long training times and the lack of representative benchmarks and data sets. Furthermore, "black box" machine learning models alone could not provide full insight into the correlations between features and the best optimizations. In this thesis, we present the first methodology and framework known to us, called Collective Mind (cM), that lets the community share various benchmarks, data sets, compilers, tools and other artifacts while formalizing and crowdsourcing optimization and learning in a reproducible way across many users and platforms. Our open-source framework and public optimization repository help make auto-tuning and machine learning practical. Furthermore, cM lets the community validate optimization results, share unexpected run-time behavior or model mispredictions, provide useful feedback for improvement, customize common auto-tuning and learning modules, improve predictive models and find missing features. Our analysis and evaluation of the proposed framework demonstrates that it can effectively expose, isolate and collaboratively identify the key features that contribute to the model's prediction accuracy. At the same time, the formalization of auto-tuning and machine learning allows us to continuously apply standard complexity-reduction techniques to leave a minimal set of influential optimizations and relevant features as well as truly representative benchmarks and data sets. We released most of the experimental results, benchmarks and data sets at http://c-mind.org while validating our techniques in the EU FP6 MILEPOST project and during a HiPEAC internship at STMicroelectronics.
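The crowdtuning loop itself, measure, share, predict, can be sketched in a few lines (all names, the feature vectors, and the nearest-neighbour model are illustrative assumptions; Collective Mind's repository, features, and models are far richer):

```python
# Minimal sketch of crowdtuning: measure flag settings, record
# (features, best flags) pairs in a shared repository, and predict
# flags for new programs via nearest neighbour on static features.

REPO = []  # shared result records: (feature_vector, best_flags)

def autotune(program_features, candidate_flags, measure):
    # Exhaustively measure each flag setting and keep the fastest.
    best_flags = min(candidate_flags, key=measure)
    REPO.append((program_features, best_flags))   # share the result
    return best_flags

def predict(program_features):
    # 1-nearest-neighbour over previously shared results.
    def dist(f):
        return sum((a - b) ** 2 for a, b in zip(f, program_features))
    return min(REPO, key=lambda rec: dist(rec[0]))[1] if REPO else None

# Toy use: pretend '-O3' measures fastest for this loop-heavy program.
flags = ["-O1", "-O2", "-O3", "-Os"]
autotune([0.9, 0.1], flags, measure=lambda f: 1.0 if f == "-O3" else 2.0)
assert predict([0.85, 0.15]) == "-O3"   # a similar program reuses the result
```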
|
357 |
Systémy kombinující automaty a gramatiky / Systems that Combine Automata and Grammars
Petřík, Patrik, January 2009
This work deals with systems that combine automata and grammars. We investigate their properties in comparison with grammar systems and automata systems. The work focuses on systems whose components are finite-state automata, right-linear grammars, pushdown automata, or context-free grammars. We also investigate the use of these systems in compilers.
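One toy way automata and grammars can cooperate in a single system is for a grammar component to generate strings and an automaton component to filter them, so the system describes the intersection of the two languages (an invented example for intuition only, not the specific system types classified in the thesis):

```python
# Component 1: a CFG, and Component 2: a DFA, cooperating as one system.

def cfg_strings(max_len):
    # CFG  S -> a S b | ab   generates { a^n b^n : n >= 1 }.
    return {"a" * n + "b" * n for n in range(1, max_len // 2 + 1)}

def fa_accepts(s):
    # DFA accepting strings whose length is divisible by 4.
    state = 0
    for _ in s:
        state = (state + 1) % 4
    return state == 0

def system_language(max_len):
    # The combined system accepts what both components agree on.
    return sorted((w for w in cfg_strings(max_len) if fa_accepts(w)), key=len)

assert system_language(8) == ["aabb", "aaaabbbb"]
```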
|
358 |
Parallel Query Systems : Demand-Driven Incremental Compilers / En arkitektur för parallella och inkrementella kompilatorer
Nolander, Christofer, January 2023
Query systems were recently introduced as an architecture for constructing compilers, and have been shown to enable fast and efficient incremental compilation, where results from previous builds are reused to accelerate future builds. With this architecture, a compiler is composed of several queries, each of which extracts a small piece of information about the source program. For example, one query might determine the type of a variable, and another the list of functions defined in some file. The dependencies of a query, which include other queries or files on disk, are automatically recorded at runtime. With these dependencies, query systems can detect changes in their inputs and incorporate them into the final output, while reusing old results from queries which have not changed. This reduces the amount of work needed to recompile code, which saves both time and energy. We present a new parallel execution model for query systems using work-stealing, which dynamically balances the workload across multiple threads. This is facilitated by various augmentations to existing algorithms to allow concurrent operations. Furthermore, we introduce a novel data structure that accelerates incremental compilation for common use cases. We evaluated the impact of these augmentations by implementing a compiler frontend capable of parsing and type-checking the Go programming language. We demonstrate a 10x reduction in compile times using the parallel execution model. Finally, under certain common conditions, we show a 5x reduction in incremental compile times compared to the state-of-the-art. / Query-system är en ny arkitektur som har använts för att implementera kompilatorer för programspråk och har ett fokus på att möjliggöra snabb och effektiv inkrementell kompilering. Med denna arkitektur består en kompilator av flera olika mindre funktioner, som var och en svarar på en liten fråga om källprogrammet, såsom typen av en variabel eller listan över funktioner i en fil. Genom att spåra hur dessa funktioner anropar varandra, och den data de läser, kan kompilatorer upptäcka förändringar i sina indata och utföra den minimala mängd arbete som krävs för att sammanställa dessa förändringar i utdata. Detta minskar mängden arbete som behövs för att kompilera om kod, vilket sparar både tid och energi. I denna rapport presenterar vi en ny exekveringsmodell för query-system som möjliggör parallellism med hjälp av work-stealing. Detta underlättas av flera tillägg till befintliga algoritmer som gör det möjligt att utföra alla operationer parallellt. Utöver detta introducerar vi även en ny datastruktur som gör inkrementell kompilering snabbare för många vanliga användningsområden. Vi utvärderade effekten av dessa förändringar genom att implementera ett kompilatorgränssnitt som kan analysera och verifiera korrekthet av typer i Go-programmeringsspråket. Resultaten visar en 10x reduktion i kompileringstider med hjälp av parallellkörningsläget. Vi demonstrerar även 5 gånger lägre kompileringstider vid inkrementella ändringar än vad som tidigare varit möjligt.
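The demand-driven, dependency-tracking core of such a query system can be sketched as follows (a single-threaded Python toy with invented names; the thesis adds work-stealing parallelism and its accelerating data structure on top of this model):

```python
# Toy demand-driven query system: each query records the inputs and
# sub-queries it reads, and a cached result is reused only while all of
# its recorded dependencies are unchanged.

class QuerySystem:
    def __init__(self):
        self.inputs, self.input_rev = {}, {}
        self.cache = {}     # key -> (value, deps, revision last verified)
        self.revision = 0
        self.stack = []     # dependency sets of queries currently running

    def set_input(self, name, value):
        self.revision += 1
        self.inputs[name] = value
        self.input_rev[name] = self.revision

    def read_input(self, name):
        if self.stack:
            self.stack[-1].add(("input", name))
        return self.inputs[name]

    def query(self, fn, *args):
        key = (fn.__name__, args)
        if self.stack:
            self.stack[-1].add(("query", key))
        if key in self.cache and self._still_valid(key):
            return self.cache[key][0]       # reuse result from a past build
        self.stack.append(set())
        value = fn(self, *args)             # run, capturing dependencies
        self.cache[key] = (value, self.stack.pop(), self.revision)
        return value

    def _still_valid(self, key):
        value, deps, verified = self.cache[key]
        for kind, dep in deps:
            if kind == "input" and self.input_rev[dep] > verified:
                return False
            if kind == "query" and not self._still_valid(dep):
                return False
        self.cache[key] = (value, deps, self.revision)
        return True

def parse(db, name):                        # query: tokenize a file
    return db.read_input(name).split()

def word_count(db, name):                   # query: derived from parse
    return len(db.query(parse, name))

db = QuerySystem()
db.set_input("main.go", "package main")
assert db.query(word_count, "main.go") == 2
db.set_input("main.go", "package main func main")   # incremental edit
assert db.query(word_count, "main.go") == 4         # only stale queries rerun
```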
|
359 |
Performance-Aware Code Size Optimization of Generic Functions through Automatic Implementation of Dynamic Dispatch / Prestandamedveten kodstorleksoptimering av generiska funktioner genom automatisk tillämpning av dynamic dispatch
Härnqvist, Ivar, January 2022
Monomorphization and dynamic dispatch are two common techniques for implementing polymorphism in statically typed programming languages. Function templates in C++ use the former technique to enable algorithms written as generic functions to be efficiently reused with multiple different data types by producing a separate function instantiation for each invocation that uses a unique permutation of argument types. This avoids the overhead of indirection associated with dynamic dispatch and allows the generated code of each instantiation to be optimized by the compiler for its specific concrete types, which typically yields great improvements in runtime performance over any dynamic approach. The disadvantage of this implementation, compared to the type-erased generics found in many other programming languages, is that careless over-use of templates with many different argument types can lead to an excessive amount of redundant code being generated for the same function. This increase in code size may increase the binary size of the final program and reduce the amount of useful code that can fit into the processor's instruction cache during execution, reducing code locality and thereby potentially degrading performance. Monomorphization can also increase compilation time due to the increase in generated code that needs to be compiled and optimized. This thesis presents a heuristic-based approach to generic programming that allows function templates to be automatically converted to use dynamic dispatch in scenarios where the resulting negative impact on runtime performance is predicted to be low. The thesis project includes the development of a proof-of-concept plugin for the Clang compiler frontend that can be used to compile existing C++ projects with the conversions applied. Based on the results of an experiment, the thesis proposes the design of a heuristic function that determines, from statically known metrics, whether a given function template should use monomorphization or dynamic dispatch. This heuristic is shown to achieve a small general improvement in program size across a set of open-source C++ projects when they are compiled using the plugin. The key findings from the experiment and from the development of the plugin are summarized with a general strategy for how the approach can be integrated into the design of future programming languages to promote more extensive use of generic programming in performance-sensitive code while avoiding regressions in program size and compilation time.
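The shape such a monomorphization-versus-dispatch heuristic can take, trading estimated code-size growth against estimated indirection cost, might look like the following (the metrics, weights, and threshold are invented for illustration; the thesis derives its heuristic experimentally from its own statically known metrics):

```python
# Sketch: monomorphize only when the expected runtime benefit outweighs
# the code-size cost of stamping out one copy per type permutation.

def should_monomorphize(body_size, num_instantiations,
                        calls_in_loops, size_weight=1.0, speed_weight=8.0):
    # Code-size cost grows with each extra instantiation of the body.
    size_cost = body_size * (num_instantiations - 1) * size_weight
    # Dynamic dispatch mainly hurts call sites executed frequently.
    dispatch_cost = calls_in_loops * speed_weight
    return dispatch_cost >= size_cost

# A big template instantiated with many types, never called in a loop:
assert not should_monomorphize(body_size=400, num_instantiations=12,
                               calls_in_loops=0)
# A small template whose call sites sit inside hot loops:
assert should_monomorphize(body_size=20, num_instantiations=2,
                           calls_in_loops=50)
```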
|
360 |
Formal verification of a synchronous data-flow compiler : from Signal to C
Ngô, Van Chan, 01 July 2014
Synchronous languages such as Signal, Lustre and Esterel are dedicated to designing safety-critical systems. Their compilers are large and complicated programs that may be incorrect in some contexts, silently producing bad compiled code from correct source programs. Such bad compiled code can invalidate safety properties that were guaranteed on the source programs by formal methods. Adopting the translation validation approach, this thesis aims at formally proving the correctness of the highly optimizing, industrial Signal compiler. The correctness proof represents both the source program and the compiled code in a common semantic framework, then formalizes a relation between the source program and its compiled code to express that the semantics of the source program are preserved in the compiled code.
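A toy analogue of translation validation: after compiling a source expression, a validator checks that the compiled code preserves the source semantics. The sketch below checks preservation by testing over a set of environments, whereas the thesis constructs a formal preservation relation between Signal programs and generated C; all names here are illustrative:

```python
# Compile arithmetic expressions to a stack machine, then validate that
# each compilation preserves the source semantics.

def compile_expr(e):
    # e is ("num", n) | ("var", x) | ("add", e1, e2)
    if e[0] == "num": return [("push", e[1])]
    if e[0] == "var": return [("load", e[1])]
    return compile_expr(e[1]) + compile_expr(e[2]) + [("add", None)]

def eval_source(e, env):
    # Reference semantics of the source language.
    if e[0] == "num": return e[1]
    if e[0] == "var": return env[e[1]]
    return eval_source(e[1], env) + eval_source(e[2], env)

def run_compiled(code, env):
    # Semantics of the target stack machine.
    stack = []
    for op, arg in code:
        if op == "push": stack.append(arg)
        elif op == "load": stack.append(env[arg])
        elif op == "add": stack.append(stack.pop() + stack.pop())
    return stack.pop()

def validate(e, envs):
    """Check that this one compilation preserves semantics on all envs."""
    code = compile_expr(e)
    return all(eval_source(e, env) == run_compiled(code, env) for env in envs)

expr = ("add", ("var", "x"), ("num", 1))
assert validate(expr, [{"x": v} for v in range(-5, 5)])
```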
|