161 |
Booleovské metody v kompilaci znalostí / Boolean methods in knowledge compilation. Kaleyski, Nikolay Stoyanov. January 2016.
The open problem in knowledge compilation of whether the language PI is at least as succinct as MODS is answered in the negative. For this purpose, a class of Boolean functions whose number of prime implicants is superpolynomial in their number of false points is constructed. A lower bound (proving that PI is not at least as succinct as MODS), an upper bound (proving that the counterexample cannot yield an exponential separation of PI and MODS) and the precise number of prime implicants of these functions are computed.
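To make the two quantities being compared concrete, here is a minimal brute-force sketch in Python using a toy three-variable function (not the superpolynomial family constructed in the thesis) that enumerates a function's false points and its prime implicants:

```python
from itertools import product, combinations

def false_points(f, n):
    """All assignments on which f evaluates to 0."""
    return [p for p in product([0, 1], repeat=n) if not f(p)]

def is_implicant(term, f, n):
    """term maps some variable indices to fixed values; it is an implicant of f
    if f is 1 on every completion of that partial assignment."""
    free = [i for i in range(n) if i not in term]
    for bits in product([0, 1], repeat=len(free)):
        point = [0] * n
        for i, v in term.items():
            point[i] = v
        for i, b in zip(free, bits):
            point[i] = b
        if not f(tuple(point)):
            return False
    return True

def prime_implicants(f, n):
    """Brute force: implicants from which no literal can be dropped."""
    primes = []
    for k in range(1, n + 1):
        for vars_ in combinations(range(n), k):
            for vals in product([0, 1], repeat=k):
                term = dict(zip(vars_, vals))
                if not is_implicant(term, f, n):
                    continue
                shrunk = [{v: x for v, x in term.items() if v != d} for d in term]
                if all(not is_implicant(s, f, n) for s in shrunk):
                    primes.append(term)
    return primes

# Toy function (not the thesis's construction): true everywhere except at the
# all-zeros and all-ones points.
f = lambda p: p not in ((0, 0, 0), (1, 1, 1))
print("false points:    ", len(false_points(f, 3)))      # 2
print("prime implicants:", len(prime_implicants(f, 3)))   # 6
```

Even this toy function has three times as many prime implicants as false points; the thesis constructs families in which the gap grows superpolynomially.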
|
162 |
Efficient query processing in managed runtimes. Nagel, Fabian Oliver. January 2015.
This thesis presents strategies to improve the query evaluation performance over huge volumes of relational-like data that is stored in the memory space of managed applications. Storing and processing application data in the memory space of managed applications is motivated by the convergence of two recent trends in data management. First, dropping DRAM prices have led to memory capacities that allow the entire working set of an application to fit into main memory and to the emergence of in-memory database systems (IMDBs). Second, language-integrated query transparently integrates query processing syntax into programming languages and, therefore, allows complex queries to be composed in the application. IMDBs typically serve as data stores to applications written in an object-oriented language running on a managed runtime. In this thesis, we propose a deeper integration of the two by storing all application data in the memory space of the application and using language-integrated query, combined with query compilation techniques, to provide fast query processing.

As a starting point, we look into storing data as runtime-managed objects in collection types provided by the programming language. Queries are formulated using language-integrated query and dynamically compiled to specialized functions that produce the result of the query in a more efficient way by leveraging query compilation techniques similar to those used in modern database systems. We show that the generated query functions significantly improve query processing performance compared to the default execution model for language-integrated query. However, we also identify additional inefficiencies that can only be addressed by processing queries using low-level techniques which cannot be applied to runtime-managed objects. To address this, we introduce a staging phase in the generated code that makes query-relevant managed data accessible to low-level query code. Our experiments in .NET show an improvement in query evaluation performance of up to an order of magnitude over the default language-integrated query implementation.

Motivated by additional inefficiencies caused by automatic garbage collection, we introduce a new collection type, the black-box collection. Black-box collections integrate the in-memory storage layer of a relational database system to store data and hide the internal storage layout from the application by employing existing object-relational mapping techniques (hence, the name black-box). Our experiments show that black-box collections provide better query performance than runtime-managed collections by allowing the generated query code to directly access the underlying relational in-memory data store using low-level techniques. Black-box collections also outperform a modern commercial database system. By removing huge volumes of collection data from the managed heap, black-box collections further improve the overall performance and response time of the application and improve the application’s scalability when facing huge volumes of collection data.

To enable a deeper integration of the data store with the application, we introduce self-managed collections. Self-managed collections are a new type of collection for managed applications that, in contrast to black-box collections, store objects. As the data elements stored in the collection are objects, they are directly accessible from the application using references, which allows for better integration of the data store with the application. Self-managed collections manually manage the memory of objects stored within them in a private heap that is excluded from garbage collection. We introduce a special collection syntax and a novel type-safe manual memory management system for this purpose. As was the case for black-box collections, self-managed collections improve query performance by utilizing a database-inspired data layout and allowing the use of low-level techniques. By also supporting references between collection objects, they outperform black-box collections.
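As an illustration of the core idea, compiling a language-integrated query into one specialized function instead of evaluating it through a generic per-row pipeline, here is a minimal Python sketch. It is an analogy only: the thesis targets .NET/LINQ and generates low-level code, and the query fragments below are hypothetical.

```python
import timeit

data = list(range(1_000_000))

# The query as data: WHERE x % 3 == 0, SELECT x * 2, then SUM.
# Predicate and projection are kept as source fragments for the code generator.
query = {"where": "x % 3 == 0", "select": "x * 2"}

def run_default(q, rows):
    """Default-style evaluation: build callables and invoke them for every row."""
    pred = eval("lambda x: " + q["where"])
    proj = eval("lambda x: " + q["select"])
    return sum(proj(x) for x in rows if pred(x))

def compile_query(q):
    """Generate one fused loop for this specific query and compile it once."""
    src = (
        "def qfn(rows):\n"
        "    acc = 0\n"
        "    for x in rows:\n"
        f"        if {q['where']}:\n"          # predicate inlined into the loop
        f"            acc += {q['select']}\n"  # projection and aggregation fused
        "    return acc\n"
    )
    ns = {}
    exec(compile(src, "<generated>", "exec"), ns)
    return ns["qfn"]

qfn = compile_query(query)
assert run_default(query, data) == qfn(data)
print("default :", timeit.timeit(lambda: run_default(query, data), number=3))
print("compiled:", timeit.timeit(lambda: qfn(data), number=3))
```

The generated function fuses the filter, projection and aggregation into one tight loop and avoids per-row function calls, which is the same effect the thesis's generated query functions achieve at a much lower level.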
|
163 |
Improving The Performance Of Dynamic Loadbalancing Multiscalar Architectures. Gokulmuthu, N. 09 1900.
No description available.
|
164 |
D'une pel toute entiere sans nulle cousture. La cinquième mise en prose du Roman de Troie, édition critique et commentaire / D'une pel toute entiere sans nulle cousture. The fifth prose version of the Roman de Troie, critical edition and commentary. Rochebouet, Anne. 28 November 2009.
This work is a critical edition of the fifth prose version of the Roman de Troie by Benoît de Sainte-Maure, which is preserved in fifteen manuscripts and is thought to have been composed at the beginning of the fourteenth century, perhaps in Naples. This prose version has two characteristics that set it apart from the other four known versions, and these form the two axes of the introductory study of the text. On the one hand, it is not an autonomous text but the Trojan section of a compilation of ancient history, the Histoire ancienne jusqu'à César in its second redaction, and it therefore belongs to the reception of that text. On the other hand, it is as much a compilation, drawing on two of the earlier prose versions, as an adaptation in prose, and its modes of writing and rewriting are studied. The edition is accompanied by a linguistic study, a glossary and an index of proper names.
|
165 |
Container performance benchmark between Docker, LXD, Podman & Buildah. Emilsson, Rasmus. January 2020.
Virtualization is a widely used technology among small and large companies alike, as running several applications on the same server is a flexible and resource-saving measure. Containers, another way of virtualizing, have become a popular choice for companies in recent years, offering even more flexibility and use cases in continuous integration and continuous development. This study explores how the leading container solutions perform in relation to one another in a test scenario that replicates a continuous integration use case: compiling a large project, in this case Firefox, from source. The tested solutions are Docker, LXD, Podman, and Buildah; their CPU and RAM usage are measured, along with the time to complete the compilation. The containers perform almost on par with bare metal, except Podman/Buildah, which perform worse during compilation, falling a few minutes behind.
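A minimal sketch of such a measurement, assuming Docker is installed and that a build image named firefox-build with a ./build.sh entry point exists (both hypothetical; the LXD, Podman and Buildah runs would use their own CLIs):

```python
import subprocess, threading, time

samples = []

def sample_stats(name, stop):
    # Sample CPU and memory of the running container every few seconds.
    while not stop.is_set():
        out = subprocess.run(
            ["docker", "stats", name, "--no-stream",
             "--format", "{{.CPUPerc}};{{.MemUsage}}"],
            capture_output=True, text=True).stdout.strip()
        if out:
            samples.append(out)
        time.sleep(5)

stop = threading.Event()
sampler = threading.Thread(target=sample_stats, args=("bench", stop))
sampler.start()

start = time.monotonic()
subprocess.run(["docker", "run", "--name", "bench", "firefox-build", "./build.sh"],
               check=True)
elapsed = time.monotonic() - start

stop.set()
sampler.join()
print(f"compile time: {elapsed:.0f} s, {len(samples)} CPU/RAM samples collected")
```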
|
166 |
Ahead of Time Compilation of EcmaScript Code Using Type Inference / Förkompilering av EcmaScript programkod baserad på typhärledning. Lund, Jonas. January 2015.
To investigate the feasibility of improving performance for EcmaScript code in environments that restrict the use of dynamic just-in-time compilers, an ahead-of-time EcmaScript-to-C compiler capable of compiling a substantial subset of the EcmaScript language has been constructed. The compiler recovers type information without custom type annotations by using the Cartesian Product Algorithm. While the compiler is not complete enough to be used in production, it has been shown to produce code that matches contemporary optimizing just-in-time compilers in terms of performance and substantially outperforms the interpreters currently used in restricted environments. In addition to constructing and benchmarking the compiler, a survey was conducted to gauge whether the selected subset of the language was acceptable for use by developers.
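The following Python sketch illustrates the core idea of the Cartesian Product Algorithm with hypothetical call-site data (it is not the thesis compiler): the callee is analyzed once per element of the cartesian product of the observed argument types, so every analyzed case is monomorphic and could be lowered to a specialized C function.

```python
from itertools import product

# Types observed (hypothetically) at call sites of a two-argument function add(a, b).
arg_types = [{int, float}, {int, str}]

def infer_add(ta, tb):
    """Abstract evaluation of `a + b` for one concrete combination of argument types."""
    if ta in (int, float) and tb in (int, float):
        return float if float in (ta, tb) else int
    if ta is str and tb is str:
        return str
    return None  # this combination would be a type error in the generated C code

# One monomorphic analysis per element of the cartesian product of argument types.
templates = {combo: infer_add(*combo) for combo in product(*arg_types)}
for combo, result in sorted(templates.items(), key=str):
    print([t.__name__ for t in combo], "->", result.__name__ if result else "type error")
```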
|
167 |
Implementing Erlang/OTP on Intel Galileo. Coada, Paul; Kaya, Erkut. January 2015.
The Intel Galileo, inspired by the well-known Arduino board, is a development board with many possibilities thanks to its comparatively powerful hardware. The Galileo has an Intel processor capable of running GNU/Linux and can be connected to the internet, which opens up the possibility of controlling it remotely. The programming language that comes with the Intel Galileo is the same as for the Arduino development boards; it is therefore very limited and does not exploit the Galileo's full strength. Our aim with this project is to integrate a more suitable programming language: one that can make better use of the relatively powerful processor to control the components of the board. The language of choice is Erlang, for good reason: Erlang can be described as a process-oriented programming language based on the functional programming paradigm, with strong support for concurrency. The project consisted of investigating whether Erlang/OTP can be combined with the Intel Galileo and of documenting how the implementation was carried out. The result was the successful integration of a complete version of GNU/Linux on the board and the cross-compilation of Erlang/OTP onto it. Having Erlang running on the system opens up many possibilities for future work, among them: creating Erlang programs for the Intel Galileo, integrating an effective API, and measuring the pros and cons of using Erlang on an Intel Galileo.
|
168 |
A decoupled approach to high-level loop optimization : tile shapes, polyhedral building blocks and low-level compilers / Une approche découplée pour l'optimisation de boucle à haut niveau. Grosser, Tobias. 21 October 2014.
Despite decades of research on high-level loop optimizations and their successful integration in production C/C++/FORTRAN compilers, most compiler-internal loop transformation systems only partially address the challenges posed by the increased complexity and diversity of today's hardware. Especially when exploiting domain-specific knowledge to obtain optimal code for complex targets such as accelerators or many-core processors, many existing loop optimization frameworks have difficulties exploiting this hardware. As a result, new domain-specific optimization schemes are developed independently without taking advantage of existing loop optimization technology, which leads both to missed optimization opportunities and to low portability of these optimization schemes across compilers. One area where we see the need for better optimizations is iterative stencil computations, an important computational problem that is regularly optimized by specialized, domain-specific compilers, but for which generating efficient code is difficult.

In this work we present new domain-specific optimization strategies that enable the generation of high-performance GPU code for stencil computations. In contrast to how most existing domain-specific compilers are implemented, we decouple the high-level optimization strategy from the low-level optimization and specialization necessary to yield optimal performance. As the high-level optimization scheme we present a new formulation of split tiling, a tiling technique that ensures reuse along the time dimension as well as balanced coarse-grained parallelism without the need for redundant computations. Using split tiling we show how to integrate a domain-specific optimization into a general-purpose C-to-CUDA translator, an approach that allows us to reuse existing non-domain-specific optimizations. We then evolve split tiling into a hybrid hexagonal/parallelogram tiling scheme that allows us to generate code that even better addresses GPU-specific concerns. To conclude our work on tiling schemes, we investigate the relation between diamond and hexagonal tiling. Starting from a detailed analysis of diamond tiling, including the requirements it poses on tile sizes and wavefront coefficients, we provide a unified formulation of hexagonal and diamond tiling which enables us to perform hexagonal tiling for two-dimensional problems (one time, one space dimension) in the context of a general-purpose optimizer such as Pluto. Finally, we use this formulation to evaluate hexagonal and diamond tiling in terms of compute-to-communication and compute-to-synchronization ratios.

In the second part of this work, we discuss our contributions to important infrastructure components, our building blocks, which enable us to decouple our high-level optimizations both from the necessary code generation optimizations and from the compiler infrastructure the optimizations are applied to. We start by presenting a new polyhedral extractor that obtains a polyhedral representation from a piece of C code, widening the supported C fragment to exploit the full generality of Presburger arithmetic and taking special care to model language semantics even in the presence of defined integer wrapping. As a next step, we present a new polyhedral AST generation approach, which extends AST generation beyond classical control flow generation by allowing the generation of user-provided mappings. Through a fine-grained option mechanism, we give the user fine-grained control over AST generator decisions and add extensive support for specialization, e.g., with a new generalized form of polyhedral unrolling. To facilitate the implementation of polyhedral transformations, we present a new schedule representation, schedule trees, which makes the inherent tree structure of schedules explicit in order to simplify working with complex polyhedral schedules. The last part of this work looks at our contributions to low-level compilers.
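To give the flavour of such loop restructuring, here is a minimal Python sketch of a 1D three-point Jacobi stencil traversed in skewed time tiles, a much simpler relative of the split and hexagonal tilings described above, written in plain CPU-side Python rather than generated CUDA:

```python
import numpy as np

def jacobi_reference(u0, T):
    # Naive form: one full spatial sweep per time step.
    u = u0.copy()
    for _ in range(T):
        nxt = u.copy()
        nxt[1:-1] = (u[:-2] + u[1:-1] + u[2:]) / 3.0
        u = nxt
    return u

def jacobi_skewed_tiled(u0, T, tile=8):
    N = len(u0)
    U = np.empty((T + 1, N))           # full space-time grid keeps the sketch simple
    U[0] = u0
    U[:, 0], U[:, -1] = u0[0], u0[-1]  # boundary columns stay fixed
    # Skewed coordinate j = i + t: every dependence of point (t, i) lies at j' <= j,
    # so tiles along j can run one after another, each sweeping the full time range.
    for j0 in range(2, N - 1 + T, tile):
        for t in range(1, T + 1):
            for j in range(j0, min(j0 + tile, N - 1 + T)):
                i = j - t
                if 1 <= i <= N - 2:
                    U[t, i] = (U[t-1, i-1] + U[t-1, i] + U[t-1, i+1]) / 3.0
    return U[T]

u0 = np.random.rand(64)
assert np.allclose(jacobi_reference(u0, 20), jacobi_skewed_tiled(u0, 20))
print("tiled traversal matches the reference stencil")
```

The skewing gives reuse along the time dimension without redundant computation while preserving the stencil's dependences; the thesis's tile shapes refine exactly this trade-off for GPUs.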
|
169 |
"Ne m'intéresse que ce qui n'est pas à moi" : une approche esthétique de la reprise d'archives dans deux films d'histoire au Brésil pendant la dictature / "I am only interested in what's not mine" : an aesthetic investigation on two Brazilian compilation films / "Só me interessa o que não é meu" : um estudo da montagem de materiais de arquivo em dois filmes brasileiros do período da ditadura militarCastro, Isabel 19 May 2018 (has links)
This thesis develops a study of two compilation films made in Brazil in the early 1970s: História do Brasil (History of Brazil, Glauber Rocha and Marcos Medeiros, 1974) and Triste Trópico (Sad Tropic, Arthur Omar, 1974). Unique works in the filmography of important filmmakers, these films, made from the appropriation of various materials, share, beyond their atypical method of filmmaking, a central interest in understanding the history of Brazil. They radically exploit the power of rewriting what already exists to build a new work with historical content. In their work with pre-existing images, História do Brasil and Triste Trópico address issues that concern not only cinema but the whole field of Brazilian cultural creation of the 1960s and 1970s, a period politically marked in Brazil by the military dictatorship (1964-1985). Based mostly on an aesthetic analysis of the films' montages, we ask how they "write" history and offer, through their very editing processes, a perspective on the Brazilian society of their time. From what materials and discursive strategies do these films develop their historical thought? A third, later film, Tudo é Brasil (Rogério Sganzerla, 1998), is brought in at points in the first part of the thesis to show how certain political and aesthetic choices of these 1974 compilation films reflect a generational stance, shared by Sganzerla, that persists over time. The thesis thus aims to contribute to mapping the range of film recycling practices and to the theoretical questions raised by the presence of archival footage in cinema, as well as by the relationship between cinema and historical narrative.
|
170 |
Generický zpětný překlad za účelem rozpoznání chování / Generic Reverse Compilation to Recognize Specific Behavior. Ďurfina, Lukáš. January 2014.
The thesis focuses on recognizing specific behavior by means of generic reverse compilation. Generic reverse compilation is a process that transforms executables from different architectures and object file formats into the same high-level language. This process is tied to the Lissom Decompiler tool. For the purpose of behavior recognition, the thesis introduces the Language for Decompilation (LfD). LfD is a simple imperative language that is well suited for comparison. The specific behavior is given by a known executable (e.g., malware), and recognition is performed by finding a similarity ratio with another, unknown executable. This similarity ratio is computed by the LfDComparator tool, which processes two LfD inputs and decides on their similarity.
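A minimal sketch of the comparison step, with hypothetical LfD-like snippets (not the Lissom toolchain; difflib only stands in for the real comparison algorithm): two decompiled representations are normalized into token streams and a similarity ratio in [0, 1] is computed, in the spirit of LfDComparator.

```python
import difflib
import re

KEYWORDS = {"func", "if", "call", "return"}

known_lfd = """
func f0(a, b):
  t0 = a + b
  if t0 > 10: call f1(t0)
  return t0
"""

unknown_lfd = """
func g0(x, y):
  v = x + y
  if v > 10: call g1(v)
  return v
"""

def tokens(src):
    """Tokenize and replace identifiers with a placeholder so that renaming
    alone does not hide structural similarity."""
    out = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", src):
        if re.match(r"[A-Za-z_]", tok) and tok not in KEYWORDS:
            out.append("ID")
        else:
            out.append(tok)
    return out

ratio = difflib.SequenceMatcher(None, tokens(known_lfd), tokens(unknown_lfd)).ratio()
print(f"similarity ratio: {ratio:.2f}")   # 1.00 here: the snippets differ only in names
```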
|