181

Jazyky a překladače v počítačových hrách / Languages and Compilers in Computer Games

Hranáč, Jan January 2010 (has links)
This project deals with the use of programming languages and compilers in computer games. Its purpose is to motivate young people to program, something that routine high-school exercises in Pascal do not always achieve. The idea was applied in a computer game developed for this work, which has several integrated languages that can be used to create the game's content (adventures, campaigns, and the objects within them). The game is based on the tabletop RPG Dragon's Lair (a Czech equivalent of D&D) from the publishing company ALTAR(TM).
182

Compiler of a Language with User-Defined Syntax for New Constructs

Kuklínek, Lukáš January 2013 (has links)
This thesis aims to design and implement an experimental programming language with support for user-defined syntactic constructs. The new language is compiled to native binary form and enforces a static type discipline at compile time. The language consists of two main components. The first is a minimalistic core based on the principles of stack-oriented languages. The second is a mechanism through which the user defines new syntactic constructs. Finally, the thesis summarizes the insights gained while designing and experimenting with a prototype compiler for the language.
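The abstract does not show the core language itself; as a hedged illustration only, the following Haskell sketch shows what a minimalistic stack-oriented core can look like, with an invented instruction set that is not taken from the thesis.

```haskell
-- Hypothetical stack-oriented core: a program is a list of instructions
-- that manipulate an operand stack; richer syntax would be desugared to it.
data Instr = Push Int | Add | Mul | Dup
  deriving Show

eval :: [Instr] -> [Int] -> [Int]
eval []            stack          = stack
eval (Push n : is) stack          = eval is (n : stack)
eval (Add    : is) (x : y : rest) = eval is (x + y : rest)
eval (Mul    : is) (x : y : rest) = eval is (x * y : rest)
eval (Dup    : is) (x : rest)     = eval is (x : x : rest)
eval _             _              = error "stack underflow"

main :: IO ()
main = print (eval [Push 2, Push 3, Add, Dup, Mul] [])  -- [25]
```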
183

Ahead of Time Compilation of EcmaScript Code Using Type Inference / Förkompilering av EcmaScript programkod baserad på typhärledning

Lund, Jonas January 2015 (has links)
To investigate the feasibility of improving the performance of EcmaScript code in environments that restrict the use of dynamic just-in-time compilers, an ahead-of-time EcmaScript-to-C compiler capable of compiling a substantial subset of the EcmaScript language has been constructed. The compiler recovers type information without requiring explicit type annotations by using the Cartesian Product Algorithm. While the compiler is not complete enough to be used in production, it has been shown to produce code that matches contemporary optimizing just-in-time compilers in performance and substantially outperforms the interpreters currently used in restricted environments. In addition to constructing and benchmarking the compiler, a survey was conducted to gauge whether the selected subset of the language was acceptable for use by developers.
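For readers unfamiliar with the Cartesian Product Algorithm named above, here is a hedged Haskell sketch of its core idea (an illustration, not code from the thesis): each argument position of a call carries a set of observed concrete types, and the callee is analyzed once per element of the cartesian product of those sets, yielding monomorphic signatures an ahead-of-time compiler can specialize on.

```haskell
-- Toy picture of the Cartesian Product Algorithm (CPA): the per-argument
-- type sets of a call site induce one concrete signature per combination,
-- which an ahead-of-time compiler can then specialize and compile.
data Ty = TNum | TStr | TBool
  deriving (Show, Eq)

signatures :: [[Ty]] -> [[Ty]]
signatures = sequence  -- cartesian product via the list monad

main :: IO ()
main = mapM_ print (signatures [[TNum, TStr], [TNum]])
-- [TNum,TNum] and [TStr,TNum]: the callee is analyzed and compiled twice
```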
184

Compiler-Based Tools to Aid in Data Transfer Optimization and On-Chip Debug of Heterogeneous Compute Systems

Ashcraft, Matthew B. 07 July 2020 (has links)
First, we present techniques to efficiently schedule data transfers through compiler analyses. Compared to transferring data immediately before and after the kernel executes, our scheduling results in orders-of-magnitude improvements in execution time, number of data transfers, and number of bytes transferred. Second, we demonstrate techniques to provide on-chip debugging for heterogeneous systems by recording execution on the software side in addition to using debugging circuitry in the hardware, and we provide a temporal correlation between the hardware and software traces through synchronization. This allows us to follow debug data between the hardware and software trace buffers. Because synchronizing the trace buffers adds cost, we explore synchronization schemes that can reduce the impact of synchronization depending on the code structure. We demonstrate the quantitative impact of these techniques on execution time and on hardware and software resources, with under a 2x increase in execution time in most cases. Third, we demonstrate how source-code debugging techniques for on-chip debugging can be applied to OpenCL FPGA kernels in heterogeneous systems. We developed techniques and a tool flow that allow users to select variables to record, automatically insert recording instructions into the kernel source code, synthesize the changes directly into the hardware design using commercial HLS tools, retrieve the trace data through kernel arguments, and present it to the user. Overall, quantitative measurements showed that our techniques result in modest increases in execution time and hardware resources.
185

Vérification formelle et optimisation de l’allocation de registres / Formal Verification and Optimization of Register Allocation

Robillard, Benoît 30 November 2010 (has links)
La prise de conscience générale de l'importance de vérifier plus scrupuleusement les programmes a engendré une croissance considérable des efforts de vérification formelle de programme durant cette dernière décennie. Néanmoins, le code qu'exécute l'ordinateur, ou code exécutable, n'est pas le code écrit par le développeur, ou code source. La vérification formelle de compilateurs est donc un complément indispensable à la vérification de code source. L'une des tâches les plus complexes de compilation est l'allocation de registres. C'est lors de celle-ci que le compilateur décide de la façon dont les variables du programme sont stockées en mémoire durant son exécution. La mémoire comporte deux types de conteneurs : les registres, zones d'accès rapide, présents en nombre limité, et la pile, de capacité supposée suffisamment importante pour héberger toutes les variables d'un programme, mais à laquelle l'accès est bien plus lent. Le but de l'allocation de registres est de tirer au mieux parti de la rapidité des registres, car une allocation de registres de bonne qualité peut conduire à une amélioration significative du temps d'exécution du programme. Le modèle le plus connu de l'allocation de registres repose sur la coloration de graphe d'interférence-affinité. Dans cette thèse, l'objectif est double : d'une part vérifier formellement des algorithmes connus d'allocation de registres par coloration de graphe, et d'autre part définir de nouveaux algorithmes optimisants pour cette étape de compilation. Nous montrons tout d'abord que l'assistant à la preuve Coq est adéquat à la formalisation d'algorithmes d'allocation de registres par coloration de graphes. Nous procédons ainsi à la vérification formelle en Coq d'un des algorithmes les plus classiques d'allocation de registres par coloration de graphes, l'Iterated Register Coalescing (IRC), et d'une généralisation de celui-ci permettant à un utilisateur peu familier du système Coq d'implanter facilement sa propre variante de cet algorithme au seul prix d'une éventuelle perte d'efficacité algorithmique. Ces formalisations nécessitent des réflexions autour de la formalisation des graphes d'interférence-affinité, de la traduction sous forme purement fonctionnelle d'algorithmes impératifs et de l'efficacité algorithmique, la terminaison et la correction de cette version fonctionnelle. Notre implantation formellement vérifiée de l'IRC a été intégrée à un prototype du compilateur CompCert. Nous avons ensuite étudié deux représentations intermédiaires de programmes, dont la forme SSA, et exploité leurs propriétés pour proposer de nouvelles approches de résolution optimale de la fusion, l'une des optimisations opérées lors de l'allocation de registres dont l'impact est le plus fort sur la qualité du code compilé. Ces approches montrent que des critères de fusion tenant compte de paramètres globaux du graphe d'interférence-affinité, tels que sa largeur d'arbre, ouvrent la voie vers de nouvelles méthodes de résolution potentiellement plus performantes. / The need for trustworthy programs has led to an increasing use of formal verification techniques over the last decade, and especially of program proof. However, the code running on the computer is not the source code, i.e. the one written by the developer, since it has to be translated by the compiler. As a result, the formal verification of compilers is required to complement source code verification. One of the hardest phases of compilation is register allocation.
Register allocation is the phase in which the compiler decides where the variables of the program are stored in memory during its execution. There are two kinds of memory locations: a limited number of fast-access zones, called registers, and a very large but slow-access stack. The aim of register allocation is to make the best possible use of registers, leading to faster executable code. The most widely used model for register allocation is interference graph coloring. In this thesis, our objective is twofold: first, formally verifying some well-known interference-graph-coloring algorithms for register allocation and, second, designing new graph-coloring register allocation algorithms. More precisely, we provide a fully formally verified implementation of Iterated Register Coalescing (IRC), a classical graph-coloring register allocation heuristic, which has been integrated into the CompCert compiler. We also studied two intermediate representations of programs used in compilers, in particular SSA form, to design new algorithms that use global properties of the graph rather than the local criteria currently used in the literature.
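As background for the graph-coloring model discussed above, here is a minimal Haskell sketch of greedy interference-graph coloring (an illustration only, not the verified Coq development and not IRC, which additionally simplifies, coalesces and spills).

```haskell
import Data.List ((\\))
import qualified Data.Map as Map

-- Interference graph: vertices are variables, an edge means the two
-- variables are live at the same time and cannot share a register.
type Var   = String
type Graph = Map.Map Var [Var]

-- Greedy colouring with k registers: give each variable the lowest
-- register index not already used by a coloured neighbour; if none is
-- free, the variable would have to be spilled to the stack.
color :: Int -> Graph -> Maybe (Map.Map Var Int)
color k g = foldr step (Just Map.empty) (Map.keys g)
  where
    step v acc = do
      coloring <- acc
      let taken = [c | n <- Map.findWithDefault [] v g
                     , Just c <- [Map.lookup n coloring]]
      case [0 .. k - 1] \\ taken of
        (c : _) -> Just (Map.insert v c coloring)
        []      -> Nothing  -- spill needed

main :: IO ()
main = print (color 2 (Map.fromList
  [("a", ["b", "c"]), ("b", ["a"]), ("c", ["a"])]))
```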
186

The Nax Language: Unifying Functional Programming and Logical Reasoning in a Language based on Mendler-style Recursion Schemes and Term-indexed Types

Ahn, Ki Yung 16 December 2014 (has links)
Two major applications of lambda calculi in computer science are functional programming languages and mechanized reasoning systems (or proof assistants). According to the Curry--Howard correspondence, it is possible, in principle, to design a unified language based on a typed lambda calculus for both logical reasoning and programming. However, the different requirements of programming languages and reasoning systems make it difficult to design such a unified language. Programming languages usually extend lambda calculi with programming-friendly features (e.g., recursive datatypes, general recursion) for the flexibility to model various computations, while sacrificing logical consistency. Logical reasoning systems usually extend lambda calculi with logic-friendly features (e.g., induction principles, dependent types) for paradox-free inference over fine-grained properties, while being more restrictive in modeling computations. In this dissertation, we design and implement a language called Nax that embraces the benefits of both. Nax accepts all recursive datatypes, allowing the same flexibility in defining recursive datatypes as in functional languages. Nax supports a number of Mendler-style recursion schemes that can express various kinds of recursive computation while guaranteeing termination. Nax provides term-indexed types for specifying fine-grained properties. In addition, Nax supports a conservative extension of Hindley--Milner type inference. The theoretical contributions of this dissertation include theories for Mendler-style recursion schemes and term-indexed types, which we developed to establish the strong normalization and logical consistency of Nax.
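As a hedged illustration of the Mendler-style recursion schemes mentioned above (written in Haskell, not in Nax syntax): the algebra receives the recursive call only as an abstract function over an abstract type variable, so it can apply it only to subterms, which is what makes termination evident.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Fixed point of a functor.
newtype Fix f = In { out :: f (Fix f) }

-- Mendler-style iteration: the recursive call (r -> a) is abstract in r,
-- so the body cannot recurse on anything but the subterms it is handed.
mcata :: (forall r. (r -> a) -> f r -> a) -> Fix f -> a
mcata phi (In x) = phi (mcata phi) x

-- Natural numbers as a fixed point, folded to an Int.
data NatF r = Zero | Succ r

toInt :: Fix NatF -> Int
toInt = mcata phi
  where
    phi _   Zero     = 0
    phi rec (Succ n) = 1 + rec n

main :: IO ()
main = print (toInt (In (Succ (In (Succ (In Zero))))))  -- 2
```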
187

Type Classes and Instance Chains: A Relational Approach

Morris, John Garrett 04 June 2013 (has links)
Type classes, first proposed during the design of the Haskell programming language, extend standard type systems to support overloaded functions. Since their introduction, type classes have been used to address a range of problems, from typing ordering and arithmetic operators to describing heterogeneous lists and limited subtyping. However, while type class programming is useful for a variety of practical problems, its wider use is limited by the inexpressiveness and hidden complexity of current mechanisms. We propose two improvements to existing class systems. First, we introduce several novel language features, instance chains and explicit failure, that increase the expressiveness of type classes while providing more direct expression of current idioms. To validate these features, we have built an implementation, demonstrating their use in a practical setting and their integration with type reconstruction for a Hindley-Milner type system. Second, we define a set-based semantics for type classes that provides a sound basis for reasoning about type class systems, their implementations, and the meanings of programs that use them.
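For context, a small reminder in plain Haskell of the mechanism being extended (this is standard type-class code, not the thesis' instance-chain syntax): a class declares an overloaded operation, instances supply it per type, and instance chains add ordered alternatives and explicit failure to this instance search.

```haskell
-- A class declares an overloaded operation; instances provide it per type.
class Pretty a where
  pretty :: a -> String

instance Pretty Bool where
  pretty True  = "yes"
  pretty False = "no"

instance Pretty a => Pretty [a] where
  pretty xs = "[" ++ concatMap pretty xs ++ "]"

-- An instance chain would instead let alternatives be tried in a stated
-- order, with an explicit failure case when none of them applies.
main :: IO ()
main = putStrLn (pretty [True, False])  -- "[yesno]"
```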
188

Compiler Optimization Effects on Register Collisions

Tan, Jonathan S 01 June 2018 (has links) (PDF)
We often want a compiler to generate executable code that runs as fast as possible. One consideration toward this goal is to keep values in fast registers to limit the number of slower memory accesses that occur. When there are not enough physical registers available for use, values are "spilled" to the runtime stack. The need for spills is discovered during register allocation, wherein values in use are mapped to physical registers. One factor in the efficacy of register allocation is the number of values in use at one time (register collisions). Register collision is affected by compiler optimizations that take place before register allocation. Though the main purpose of compiler optimizations is to make the overall code better and faster, some optimizations can actually increase register collisions. This may force the register allocation process to spill. This thesis studies the effects of different compiler optimizations on register collisions.
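To make the notion of register collisions concrete, here is a hedged Haskell sketch (not from the thesis): given the live range of each value, the maximum number of simultaneously live values is the peak register pressure, and fewer physical registers than that peak forces spills.

```haskell
-- Each value is live over an interval of instruction indices [start, end].
-- The largest number of intervals overlapping any single point is the peak
-- register pressure; with fewer physical registers than that, some value
-- must be spilled to the runtime stack.
type LiveRange = (Int, Int)

maxPressure :: [LiveRange] -> Int
maxPressure ranges = maximum (0 : [liveAt p | p <- points])
  where
    points   = concat [[s, e] | (s, e) <- ranges]
    liveAt p = length [() | (s, e) <- ranges, s <= p, p <= e]

main :: IO ()
main = print (maxPressure [(0, 4), (1, 2), (3, 6), (5, 7)])  -- 2
```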
189

[en] OPTIMIZING THE PALLENE COMPILER / [pt] OTIMIZANDO O COMPILADOR PALLENE

LEONARDO KRAUSE LIPET SLIPOI KAPLAN 22 June 2021 (has links)
[pt] Linguagens dinâmicas provêm flexibilidade e simplicidade em troca de menos informação em tempo de compilação, o que resulta em perda de desempenho. Atacando este problema no contexto de Lua, a linguagem de programação Pallene surge como uma alternativa. Neste trabalho, examinamos o atual estado de Pallene, procurando por padrões responsáveis por perdas de desempenho. Baseado nestes padrões, propusemos e implementamos uma série de otimizações usando técnicas de análise estática. / [en] Dynamic languages provide flexibility and simplicity in exchange for less compile-time information, leading to slower run times. Addressing this problem in the Lua context, the Pallene programming language appears as an alternative. In this work, we studied the current state of Pallene, searching for patterns that caused performance losses. Based on these patterns, we proposed and implemented several optimizations with the use of static analysis techniques.
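The abstract does not name the individual optimizations; purely as a generic, hypothetical example of the kind of static-analysis-driven rewriting described, here is a constant-folding pass over a toy expression type.

```haskell
-- Constant folding: subexpressions whose operands are known at compile
-- time are evaluated by the compiler, removing work from run time.
data Expr = Lit Int | Var String | Add Expr Expr | Mul Expr Expr
  deriving Show

fold :: Expr -> Expr
fold (Add a b) = case (fold a, fold b) of
  (Lit x, Lit y) -> Lit (x + y)
  (a', b')       -> Add a' b'
fold (Mul a b) = case (fold a, fold b) of
  (Lit x, Lit y) -> Lit (x * y)
  (a', b')       -> Mul a' b'
fold e = e

main :: IO ()
main = print (fold (Mul (Add (Lit 2) (Lit 3)) (Var "n")))
-- Mul (Lit 5) (Var "n")
```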
190

[en] A FOREIGN FUNCTION INTERFACE FOR PALLENE / [pt] UMA INTERFACE DE FUNÇÕES EXTERNAS PARA PALLENE

GABRIEL COUTINHO DE PAULA 10 December 2021 (has links)
[pt] Pallene é um subconjunto estaticamente tipado da linguagem de programação Lua, projetada para atuar como uma linguagem de sistemas em contraparte ao scripting de Lua, e usada para escrever bibliotecas de baixo nível e módulos de extensão para Lua. Nesse sentido, Pallene é uma linguagem companheira, usada sempre lado a lado com Lua, compartilhando seu runtime. Pallene, tanto como uma linguagem de sistema em contraparte a Lua e como uma ponte entre Lua e outras linguagens, deve fornecer um mecanismo de interação com bibliotecas e serviços de sistema escritos em uma linguagem de baixo nível como C. Para este fim, apresentamos um design e implementação de Interface de Função Estrangeira (IFE) para Pallene, que permite chamar funções externas que seguem as convenções de chamada C, e manipular representações de dados C diretamente. Nosso design equilibra flexibilidade e segurança seguindo uma abordagem empírica, em que preferimos sacrificar flexibilidade hipotética a menos que vejamos um caso de uso que nos faça comprometer em segurança. Nossa implementação tem como objetivo ser tão portátil quanto o Pallene, além de ter um bom desempenho e ser simples. Uma vez que a implementação de Pallene já depende de um compilador de C, a nossa IFE usa agressivamente o compilador de C já disponível para sua vantagem. Dessa forma, podemos usar diretamente os arquivos de cabeçalho de bibliotecas externas, permitindo que a IFE verifique a exatidão das ligações de função externa e que use macros de C. / [en] Pallene is a statically typed subset of the Lua programming language, designed to act as a system-language counterpart to Lua’s scripting, and used to write lower-level libraries and extension modules for Lua. In this sense, Pallene is a companion language, always intended to be used side-by-side with Lua, sharing its runtime. Pallene, both as a system-language counterpart to Lua and as a bridge between Lua and foreign languages, must provide a mechanism to interact with libraries and system services written in a low-level language like C. To this end, we present a Foreign Function Interface (FFI) design and implementation for Pallene, which allows for calling external functions that follow C calling conventions and manipulating C data representations directly. Our design balances flexibility and safety following an empirical approach, wherein we prefer to sacrifice hypothetical flexibility unless we see a real use case that forces us to compromise on safety. Our implementation aims to be as portable as Pallene, as well as performant and simple. Since Pallene's implementation already depends on a C compiler, the FFI aggressively uses the already available C compiler to its advantage. This way, we can directly use foreign libraries' header files, enabling the FFI to verify the correctness of function bindings and to use macro definitions.
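Pallene's own FFI syntax is not reproduced in the abstract; purely as an analogy from another language, this is what binding a C function that follows the C calling convention looks like through Haskell's foreign function interface.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble)

-- Bind the C library function `cos` from <math.h>: the foreign import
-- states its C-level type, and calls from Haskell go straight through the
-- C calling convention, much as a Pallene FFI binding would call into C.
foreign import ccall unsafe "math.h cos"
  c_cos :: CDouble -> CDouble

main :: IO ()
main = print (c_cos 0)  -- 1.0
```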
