81

An LLVM backend for the Open Modelica Compiler / Konstruktion av en LLVM-baserad backend för OpenModelica-kompilatorn

Tinnerholm, John January 2019 (has links)
This thesis presents the construction and evaluation of an LLVM-based code generator, an LLVM backend. The LLVM-based backend was introduced into the OpenModelica compiler to examine the advantages and disadvantages of compiling Modelica and MetaModelica to LLVM IR instead of C. To answer this question, the LLVM backend was compared against the existing interpreter and C code generator using four different schemes with corresponding cases. The comparison was made for both optimised and unoptimised code. From the experiments, it was concluded that an LLVM backend can be used to improve runtime and compile-time performance in the OpenModelica Interactive environment.
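As a rough, hedged illustration of what emitting LLVM IR directly (rather than C source) looks like, the sketch below uses the llvmlite Python binding to build IR for a trivial function. It is not the thesis's backend; the module and function names are invented for the example.

```python
# Minimal sketch: emitting LLVM IR directly instead of generating C source.
# Uses the llvmlite binding; the module/function names are illustrative only.
from llvmlite import ir

# A module standing in for one (hypothetical) compiled model.
module = ir.Module(name="example_model")

# double square(double x) { return x * x; }
fnty = ir.FunctionType(ir.DoubleType(), [ir.DoubleType()])
func = ir.Function(module, fnty, name="square")
block = func.append_basic_block(name="entry")
builder = ir.IRBuilder(block)
x, = func.args
builder.ret(builder.fmul(x, x, name="xsq"))

# The textual IR can then be JIT-compiled or fed to LLVM's optimizers.
print(module)
```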
82

Retargeting a C Compiler for a DSP Processor / Anpassning av en C-kompilator för kodgenerering till en DSP-processor

Antelius, Henrik January 2004 (has links)
The purpose of this thesis is to retarget a C compiler for a DSP processor. Developing a new compiler from scratch is a major task. Instead, modifying an existing compiler so that it generates code for another target is a common way to develop compilers for new processors. This is called retargeting. This thesis describes how this was done with the LCC C compiler for the Motorola DSP56002 processor.
83

A Template-Based Code Generator for the OpenModelica Compiler

Lindberg, Rickard January 2010 (has links)
A new, template-based code generator has been implemented for the OpenModelica compiler. All data needed for target code generation has been collected in a new data structure that is then sent to templates which generate target code based on that data. This simplifies the implementation of the code generator and also makes it possible to write a different set of templates to generate target code in a different language. The new, template-based code generator currently only supports generation of target code for simulating Modelica models. In that scenario it translates models roughly at the same speed as the old code generator.
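The abstract's core idea, one collected data structure rendered through interchangeable templates, can be sketched as follows. This is a hedged Python illustration, not the OpenModelica implementation; the SimCode fields and the C template are assumptions made only for the example.

```python
# Sketch of template-based code generation: one data structure, many templates.
# The SimCode fields and the template below are illustrative assumptions.
from dataclasses import dataclass
from string import Template

@dataclass
class SimCode:
    model_name: str
    equations: list   # already-lowered target expressions, as strings

C_TEMPLATE = Template(
    "void ${model}_step(double *x) {\n"
    "${body}\n"
    "}\n"
)

def generate_c(code: SimCode) -> str:
    body = "\n".join(f"    {eq};" for eq in code.equations)
    return C_TEMPLATE.substitute(model=code.model_name, body=body)

# A different target language only needs a different template, not a new
# traversal of the compiler's internal data structures.
if __name__ == "__main__":
    sim = SimCode("Pendulum", ["x[0] = x[0] + 0.01 * x[1]",
                               "x[1] = x[1] - 0.01 * x[0]"])
    print(generate_c(sim))
```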
84

Data mining flow graphs in a dynamic compiler

Jocksch, Adam 11 1900 (has links)
This thesis introduces FlowGSP, a general-purpose sequence mining algorithm for flow graphs. FlowGSP ranks sequences according to the frequency with which they occur and according to their relative cost. This thesis also presents two parallel implementations of FlowGSP. The first implementation uses Java threads and is designed for use on workstations equipped with multi-core CPUs. The second implementation is distributed in nature and intended for use on clusters. The thesis also presents results from an application of FlowGSP to mine program profiles in the context of the development of a dynamic optimizing compiler. Interpreting patterns within raw profiling data is extremely difficult and heavily reliant on human intuition. FlowGSP has been tested on performance-counter profiles collected from the IBM WebSphere Application Server. This investigation identifies a number of sequences which are known to be typical of WebSphere Application Server behavior, as well as some sequences which were previously unknown.
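A greatly simplified sketch of the general idea, mining cost-weighted attribute sequences along the edges of a flow graph, is given below. It is not FlowGSP itself; the graph, the attributes and the weighting scheme are invented for illustration.

```python
# Greatly simplified sketch of mining attribute sequences along a flow graph.
# This is NOT FlowGSP; the graph, attributes and weighting are assumptions
# made only for illustration.
from collections import defaultdict
from itertools import product

# Each node of the (hypothetical) profile graph carries a set of attributes
# (e.g. hardware-counter events) and a relative cost (e.g. ticks spent there).
nodes = {
    "A": ({"cache_miss"}, 5),
    "B": ({"branch_miss", "cache_miss"}, 2),
    "C": ({"branch_miss"}, 1),
}
edges = [("A", "B"), ("B", "C"), ("A", "C")]

def mine_pairs(nodes, edges):
    """Score attribute pairs occurring on adjacent nodes, weighted by cost."""
    scores = defaultdict(float)
    total_cost = sum(cost for _, cost in nodes.values())
    for src, dst in edges:
        src_attrs, src_cost = nodes[src]
        dst_attrs, dst_cost = nodes[dst]
        for seq in product(sorted(src_attrs), sorted(dst_attrs)):
            # weight a sequence by the fraction of total cost it covers
            scores[seq] += (src_cost + dst_cost) / total_cost
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for seq, score in mine_pairs(nodes, edges):
    print(seq, round(score, 3))
```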
85

Impacts of Compiler Optimizations on Address Bus Energy: An Empirical Study

TOMIYAMA, Hiroyuki 01 October 2004 (has links)
No description available.
86

Constraint Programming Techniques for Optimal Instruction Scheduling

Malik, Abid 03 1900 (has links)
Modern processors have multiple pipelined functional units and can issue more than one instruction per clock cycle. This puts great pressure on the instruction scheduling phase in a compiler to expose maximum instruction level parallelism. Basic blocks and superblocks are commonly used regions of code in a program for instruction scheduling. Instruction scheduling coupled with register allocation is also a well studied problem to produce better machine code. Scheduling basic blocks and superblocks optimally, with or without register allocation, is NP-complete, and is done sub-optimally in production compilers using heuristic approaches. In this thesis, I present a constraint programming approach to the superblock and basic block instruction scheduling problems for both idealized and realistic architectures. Basic block scheduling with register allocation with no spilling allowed is also considered. My models for both basic block and superblock scheduling are optimal and fast enough to be incorporated into production compilers. I experimentally evaluated my optimal schedulers on the SPEC 2000 integer and floating point benchmarks. On this benchmark suite, the optimal schedulers were very robust and scaled to the largest basic blocks and superblocks. Depending on the architectural model, between 99.991% and 99.999% of all basic blocks and superblocks were solved to optimality. The schedulers were able to routinely solve the largest blocks, including blocks with up to 2600 instructions. My results compare favorably to the best previous optimal approaches, which are based on integer programming and enumeration. My approach for basic block scheduling without allowing spilling was good enough to solve 97.496% of all basic blocks in the SPEC 2000 benchmark. The approach was able to solve basic blocks as large as 50 instructions for both idealized and realistic architectures within reasonable time limits. Again, my results compare favorably to recent work on optimal integrated code generation, which is based on integer programming.
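The flavour of the problem can be conveyed with a toy model: schedule a tiny dependence graph under latencies and an issue-width limit so that the schedule length is minimal. The sketch below solves it by brute force; the thesis uses a real constraint programming solver with far more sophisticated propagation, and the instructions and latencies here are invented.

```python
# Hedged sketch: basic-block scheduling as a tiny constrained optimization
# problem, solved by exhaustive search. The instructions, latencies and issue
# width are invented for illustration only.
from itertools import permutations

# (name -> latency); dependencies map a consumer to its producers.
instrs = {"load": 3, "add": 1, "mul": 2, "store": 1}
deps = {"add": ["load"], "mul": ["load"], "store": ["add", "mul"]}
ISSUE_WIDTH = 2  # instructions that may start in the same cycle

def makespan(order):
    """Greedily assign the earliest legal cycle to each instruction in `order`."""
    start, per_cycle = {}, {}
    for name in order:
        cycle = max((start[p] + instrs[p] for p in deps.get(name, [])), default=0)
        while per_cycle.get(cycle, 0) >= ISSUE_WIDTH:   # issue-width constraint
            cycle += 1
        start[name] = cycle
        per_cycle[cycle] = per_cycle.get(cycle, 0) + 1
    return max(start[n] + instrs[n] for n in order)

# Enumerate only orders that respect the dependences, then minimize the
# schedule length; feasible only for toy blocks, but the objective is explicit.
valid = (o for o in permutations(instrs)
         if all(o.index(p) < o.index(c) for c, ps in deps.items() for p in ps))
best = min(valid, key=makespan)
print(best, makespan(best))
```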
87

An LLVM Compiler for CAL

Ullah, Haseeb, Mofakhar, Amir January 2011 (has links)
Massively parallel architectures are gaining momentum thanks to the opportunities for both high performance and low power consumption. After being a matter for experiments in academia, manycores are now in production, and industries whose products rely on high performance, for example in the fields of telecom and radar, are in the process of adopting them. In order to encourage adoption, there is a need for compiler technologies that can make appropriate use of the opportunities of these architectures. Programming languages with constructs that the compiler can easily identify as parallel are a reasonable starting point for these technologies. For example, in CAL (Caltrop Actor Language) [1], a program is organized as a network of actors, and these actors can be executed in parallel. As a first approach, if there were enough processors for all actors in the network, each actor could be mapped onto one processor. Writing a compiler is a comprehensive software engineering project, and there are a number of tools, data structures and algorithms that have to be chosen and that facilitate the task: among the tools are lexer and parser generators, among the data structures are abstract syntax trees and intermediate representations for code generation, and among the algorithms are those for analyzing properties of the program and for translating between different data structures. LLVM (Low Level Virtual Machine) [7] is a compiler infrastructure that offers a well-defined, language- and target-independent intermediate representation for programs. LLVM also provides compile-time, link-time and run-time optimization. This report discusses the front end of a compiler for CAL, including generation of LLVM code. The work described in this report translates CAL programs into LLVM's intermediate representation and includes a runtime system that handles intercommunication links between actors and actor scheduling. This is an intermediate step in generating LLVM code for parallel architectures. With this compiler, application developers who need to evaluate different parallel platforms will be able to have their applications written in CAL, and only the back end, from LLVM to the platform, will need to be reprogrammed.
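The runtime described above, actors connected by communication links and driven by a scheduler, can be sketched in a few lines. This is a hedged, plain-Python illustration only; the actual system generates LLVM IR and provides its own runtime, and the actor behaviour below is invented.

```python
# Hedged sketch of the runtime idea: actors connected by FIFO channels and
# driven by a simple round-robin scheduler. Plain Python for illustration;
# the actor behaviour and network here are invented.
from collections import deque

class Actor:
    def __init__(self, name):
        self.name = name
        self.inbox = deque()   # intercommunication link feeding this actor
        self.outputs = []      # downstream actors

    def fire(self):
        """Consume one token if available and forward a result; return True if fired."""
        if not self.inbox:
            return False
        token = self.inbox.popleft()
        for out in self.outputs:
            out.inbox.append(token + 1)   # stand-in for the actor's action body
        return True

def run(actors, max_rounds=50):
    """Round-robin scheduling: give every actor a turn until none can fire."""
    for _ in range(max_rounds):
        fired = [actor.fire() for actor in actors]
        if not any(fired):
            break

source, inc, sink = Actor("source"), Actor("inc"), Actor("sink")
source.outputs, inc.outputs = [inc], [sink]
source.inbox.extend([0, 1, 2])
run([source, inc])           # the sink only accumulates tokens in this sketch
print(list(sink.inbox))      # -> [2, 3, 4] (each actor added 1 while forwarding)
```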
