11

An automated OpenCL FPGA compilation framework targeting a configurable, VLIW chip multiprocessor

Parker, Samuel J. January 2015
Modern system-on-chips augment their baseline CPU with coprocessors and accelerators to increase overall computational capacity and power efficiency, and thus have evolved into heterogeneous systems. Several languages have been developed to enable this paradigm shift, including CUDA and OpenCL. This thesis discusses a unified compilation environment to enable heterogeneous system design through the use of OpenCL and a customised VLIW chip multiprocessor (CMP) architecture, known as the LE1. An LLVM compilation framework was researched and a prototype developed to enable the execution of OpenCL applications on the LE1 CPU. The framework fully automates the compilation flow and supports work-item coalescing to better utilise the CPU cores and alleviate the effects of thread divergence. This thesis discusses in detail both the software stack and the target hardware architecture, and evaluates the scalability of the proposed framework on a cycle-accurate simulator. This is achieved through the execution of 12 benchmarks across 240 different machine configurations, as well as further results utilising an incomplete development branch of the compiler. It is shown that the benchmarks generally scale well on the LE1 architecture up to eight cores, beyond which the memory system becomes a serious bottleneck. Results demonstrate superlinear performance on certain benchmarks (9x for the bitonic sort benchmark with 8 dual-issue cores), with further improvements from compiler optimisations (14x for bitonic sort with the same configuration).
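
To make the work-item coalescing idea concrete, below is a minimal sketch (not the LE1 toolchain itself) of the transformation: instead of dispatching one thread per OpenCL work-item, the kernel body is wrapped in a loop so each core executes a contiguous slice of work-items. The kernel, its arguments and the work split are invented for illustration.

```cpp
// Hypothetical illustration of work-item coalescing: instead of launching one
// thread per work-item, the compiler wraps the kernel body in a loop over the
// work-group, so one core executes many logical work-items.
#include <cstdio>
#include <vector>

// Original per-work-item kernel body (what an OpenCL kernel would express).
static void kernel_body(int gid, const float *a, const float *b, float *c) {
    c[gid] = a[gid] + b[gid];
}

// Coalesced form: one call per core iterates over its slice of work-items.
static void coalesced_kernel(int first, int count,
                             const float *a, const float *b, float *c) {
    for (int gid = first; gid < first + count; ++gid)  // work-item loop
        kernel_body(gid, a, b, c);
}

int main() {
    const int n = 8, cores = 2;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);
    for (int core = 0; core < cores; ++core)           // one chunk per core
        coalesced_kernel(core * (n / cores), n / cores,
                         a.data(), b.data(), c.data());
    for (float v : c) std::printf("%.1f ", v);         // prints 3.0 eight times
    std::printf("\n");
}
```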
12

An LLVM backend for the OpenModelica Compiler

Tinnerholm, John January 2019
This thesis presents the construction and evaluation of an LLVM-based code generator, an LLVM backend, for the OpenModelica compiler. The LLVM backend was introduced to examine the advantages and disadvantages of compiling Modelica and MetaModelica to LLVM IR instead of C. To answer this question, the LLVM backend was compared against the existing interpreter and C code generator using four different schemes with corresponding test cases, for both optimised and unoptimised code. From the experiments, it was concluded that an LLVM backend can be used to improve runtime and compile-time performance in the OpenModelica Interactive environment.
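
The kind of code generation such a backend performs can be sketched with LLVM's C++ API. The following is a generic, hedged example (LLVM 10+ headers assumed; the function der_x and its equation are invented, not taken from the thesis) that builds and prints a one-function module.

```cpp
// Generic sketch of emitting LLVM IR with the C++ API; the function der_x and
// the equation dx = -0.5 * x are invented examples, not OpenModelica output.
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

int main() {
    LLVMContext ctx;
    Module mod("modelica_model", ctx);
    IRBuilder<> b(ctx);

    // double der_x(double x) { return -0.5 * x; }
    FunctionType *fty =
        FunctionType::get(b.getDoubleTy(), {b.getDoubleTy()}, false);
    Function *f =
        Function::Create(fty, Function::ExternalLinkage, "der_x", &mod);
    b.SetInsertPoint(BasicBlock::Create(ctx, "entry", f));
    Value *dx = b.CreateFMul(ConstantFP::get(b.getDoubleTy(), -0.5),
                             f->getArg(0), "dx");
    b.CreateRet(dx);

    verifyFunction(*f);          // sanity-check the generated IR
    mod.print(outs(), nullptr);  // dump textual IR; JIT or emit object code next
}
```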
13

An LLVM Compiler for CAL

Ullah, Haseeb, Mofakhar, Amir January 2011
Massively parallel architectures are gaining momentum thanks to the opportunities for both high performance and low power consumption. After being a matter for experiments in academia, manycores are now in production, and industries whose products rely on high performance, for example in the fields of telecom and radar, are in the process of adopting them. To encourage adoption, there is a need for compiler technologies that can make appropriate use of the opportunities these architectures offer. Programming languages with constructs that the compiler can easily identify as parallel are a reasonable starting point for these technologies. For example, in CAL (Caltrop Actor Language) [1], a program is organized as a network of actors, and these actors can be executed in parallel. As a first approach, if there were enough processors for all actors in the network, each actor could be mapped onto one processor. Writing a compiler is a comprehensive software engineering project, and a number of tools, data structures and algorithms have to be chosen to facilitate the task: lexer and parser generators among the tools; abstract syntax trees and intermediate representations for code generation among the data structures; and algorithms for analyzing properties of the program and for translating between different data structures. LLVM (Low Level Virtual Machine) [7] is a compiler infrastructure that offers a well-defined, language- and target-independent intermediate representation for programs. LLVM also provides compile-time, link-time and run-time optimization. This report discusses the front end of a compiler for CAL, including generation of LLVM code. The work described in this report translates CAL programs into LLVM's intermediate representation and includes a runtime system that handles intercommunication links between actors and actor scheduling. This is an intermediate step towards generating LLVM code for parallel architectures. With this compiler, application developers who need to evaluate different parallel platforms will be able to keep their applications written in CAL; only the back-end, from LLVM to the platform, will need to be re-implemented.
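
The runtime system the report describes, intercommunication links between actors plus actor scheduling, can be sketched as FIFO channels and a round-robin scheduler that fires any actor whose input tokens are available. The names below (Channel, Actor, Producer, Doubler) are illustrative, not from the compiler itself.

```cpp
// Minimal sketch of the runtime model a CAL compiler targets: actors connected
// by FIFO channels, fired by a round-robin scheduler when inputs are available.
#include <cstdio>
#include <deque>
#include <vector>

using Channel = std::deque<int>;  // intercommunication link between actors

struct Actor {
    virtual bool fire() = 0;      // returns true if an action fired
    virtual ~Actor() = default;
};

struct Producer : Actor {
    Channel &out; int next = 0, limit;
    Producer(Channel &o, int n) : out(o), limit(n) {}
    bool fire() override {
        if (next >= limit) return false;
        out.push_back(next++); return true;
    }
};

struct Doubler : Actor {
    Channel &in, &out;
    Doubler(Channel &i, Channel &o) : in(i), out(o) {}
    bool fire() override {
        if (in.empty()) return false;          // guard: need one input token
        out.push_back(in.front() * 2); in.pop_front(); return true;
    }
};

int main() {
    Channel a, b;
    Producer p(a, 4); Doubler d(a, b);
    std::vector<Actor*> network{&p, &d};
    bool progress = true;                      // run until no actor can fire
    while (progress) {
        progress = false;
        for (Actor *act : network) progress |= act->fire();
    }
    for (int v : b) std::printf("%d ", v);     // prints 0 2 4 6
    std::printf("\n");
}
```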
14

Software Based GPU Framework

Miretsky, Evgeny 05 December 2013
A software-based GPU design, where most of the 3D pipeline is executed in software on shaders with minimal support from custom hardware blocks, provides three benefits: it (1) simplifies the GPU design, (2) turns 3D graphics into a general-purpose application, and (3) opens the door to applying compiler optimization to the whole 3D pipeline. In this thesis we design a framework and a full software stack to support further research in the field. LLVM IR is used as a flexible shader IR, and all fixed-function hardware blocks are translated into it. A sort-middle, tile-based architecture is used for the 3D pipeline, and a trace-file-based methodology is applied to make the system more modular. Further, we implement a GPU model and use it to perform an architectural exploration of the design space of the proposed software-based GPU system.
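
The sort-middle step can be sketched as follows: after geometry processing, each triangle is binned to the screen tiles its bounding box overlaps, so per-tile rasterization can proceed independently. This is an illustrative sketch with invented tile and screen sizes, not the thesis framework.

```cpp
// Sketch of the sort-middle step in a tile-based software rasterizer: each
// triangle is binned to every screen tile its bounding box overlaps.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Tri { float x0, y0, x1, y1, x2, y2; };

constexpr int TILE = 16, W = 64, H = 64;            // 4x4 grid of 16px tiles
constexpr int TX = W / TILE, TY = H / TILE;

int main() {
    std::vector<Tri> tris = {{2, 2, 30, 5, 10, 40}, {50, 50, 60, 52, 55, 63}};
    std::vector<std::vector<int>> bins(TX * TY);    // triangle IDs per tile

    for (int i = 0; i < (int)tris.size(); ++i) {
        const Tri &t = tris[i];
        float minx = std::min({t.x0, t.x1, t.x2}), maxx = std::max({t.x0, t.x1, t.x2});
        float miny = std::min({t.y0, t.y1, t.y2}), maxy = std::max({t.y0, t.y1, t.y2});
        for (int ty = (int)miny / TILE; ty <= (int)maxy / TILE && ty < TY; ++ty)
            for (int tx = (int)minx / TILE; tx <= (int)maxx / TILE && tx < TX; ++tx)
                bins[ty * TX + tx].push_back(i);    // bin by bounding box
    }
    for (int b = 0; b < TX * TY; ++b)
        if (!bins[b].empty())
            std::printf("tile %d: %zu tri(s)\n", b, bins[b].size());
}
```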
16

An LLVM Back-end for REPLICA: Code Generation for a Multi-core VLIW Processor with Chaining

Åkesson, Daniel January 2012
REPLICA is a PRAM-NUMA hybrid architecture with support for instruction-level parallelism as a VLIW architecture. REPLICA can also chain instructions, so that the output from an earlier instruction can be used as input to a later instruction in the same execution step. There are plans in the REPLICA project to develop a new C-based programming language, compilers and libraries to speed up development of parallel programs. As a part of the REPLICA project, we have developed an LLVM back-end that can be used to generate code for the REPLICA architecture. We have also created a simple optimization algorithm to make better use of REPLICA's support for instruction-level parallelism. Some changes to Clang, LLVM's front-end for C/C++/Objective-C, were also necessary so that we could use assembler inlining in our REPLICA programs. Clang is used to compile C code to LLVM's internal representation, and LLVM with our REPLICA back-end transforms that representation into MBTAC assembler.
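
Instruction chaining can be illustrated with a toy bundle packer: ordinarily a dependent instruction must wait for the next execution step, but chaining lets it consume a result produced earlier in the same step. The sketch below is a guess at the general idea, with an invented two-slot machine and a program given in dependency order; it is not the thesis' optimization algorithm.

```cpp
// Toy VLIW bundle packer with chaining: a dependent instruction may issue in
// the same execution step as its producer if chaining is enabled.
#include <cstdio>
#include <set>
#include <vector>

struct Instr { const char *op; int dst; std::vector<int> srcs; };

int main() {
    // r3 = r1+r2; r4 = r3*r0 (depends on r3); r5 = r1-r0 (independent)
    std::vector<Instr> prog = {
        {"add", 3, {1, 2}}, {"mul", 4, {3, 0}}, {"sub", 5, {1, 0}}};
    const int width = 2;                     // issue slots per execution step
    const bool chaining = true;              // chained results within a step

    std::set<int> ready = {0, 1, 2};         // registers valid before step 1
    size_t next = 0;                         // program assumed in dep. order
    for (int step = 1; next < prog.size(); ++step) {
        std::set<int> producedThisStep;
        std::printf("step %d:", step);
        for (int slot = 0; slot < width && next < prog.size(); ++slot) {
            const Instr &in = prog[next];
            bool ok = true;                  // all sources available?
            for (int s : in.srcs)
                ok &= ready.count(s) || (chaining && producedThisStep.count(s));
            if (!ok) break;                  // stop packing, start next step
            std::printf(" %s->r%d", in.op, in.dst);
            producedThisStep.insert(in.dst);
            ++next;
        }
        ready.insert(producedThisStep.begin(), producedThisStep.end());
        std::printf("\n");                   // with chaining: add+mul share step 1
    }
}
```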
17

An Empirical Study of Alias Analysis Techniques

Tran, Andrew T 01 June 2018
As software projects become larger and more complex, software optimization at that scale is only feasible through automated means. One such component of software optimization is alias analysis, which attempts to determine which variables in a program refer to the same area in memory, and which is used to decide whether instructions can be relocated to improve performance without interfering with program execution. Several alias analyses have been proposed over the past few decades, with varying degrees of precision and of time and space complexity, but few studies have been conducted to compare these techniques with one another or to validate them against measured program data. Normally, such measurement is out of the scope of alias analyses, because these processes are static and can only rely upon the input source code. We address these limitations by instrumenting several benchmarks and combining their runtime data with commonly used alias analyses to objectively measure the accuracy of those analyses. We also gather further program statistics to determine which programs are the most suitable for evaluating subsequent alias analysis techniques.
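
The instrumentation idea can be sketched as follows: record at run time the addresses each pointer access site touches, then check whether two sites ever referenced the same memory, giving ground truth against which a static may-alias verdict can be compared. Site IDs and the record/observedAlias helpers are invented for illustration.

```cpp
// Sketch of alias-analysis validation: log addresses per access site at run
// time, then test whether two sites ever touched the same memory.
#include <cstdio>
#include <map>
#include <set>

static std::map<int, std::set<const void*>> siteAddrs;  // site id -> addresses

static void record(int site, const void *addr) { siteAddrs[site].insert(addr); }

// Did sites a and b ever touch the same address?
static bool observedAlias(int a, int b) {
    for (const void *p : siteAddrs[a])
        if (siteAddrs[b].count(p)) return true;
    return false;
}

int main() {
    int x = 0, y = 0;
    int *p = &x, *q = &x, *r = &y;
    record(1, p); *p = 1;        // instrumented store, site 1
    record(2, q); *q = 2;        // instrumented store, site 2
    record(3, r); *r = 3;        // instrumented store, site 3
    std::printf("site1 vs site2: %s\n", observedAlias(1, 2) ? "alias" : "no alias");
    std::printf("site1 vs site3: %s\n", observedAlias(1, 3) ? "alias" : "no alias");
}
```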
18

Practical Type and Memory Safety Violation Detection Mechanisms

Yuseok Jeon 29 August 2020
System programming languages such as C and C++ are designed to give the programmer full control over the underlying hardware. However, this freedom comes at the cost of type and memory safety violations, which may allow an attacker to compromise applications. In particular, type safety violation, also known as type confusion, is one of the major attack vectors used to corrupt modern C++ applications. In the past years, several type confusion detectors have been proposed, but they are severely limited by high performance overhead, low detection coverage, and high false positive rates. To address these issues, we propose HexType and V-Type. First, we propose HexType, a tool that provides low-overhead disjoint metadata structures, compiler optimizations, and handling for specific object allocation patterns. Compared to prior work, HexType significantly improves detection coverage and reduces performance overhead; it also discovers new type confusion bugs in real-world programs such as Qt and Apache Xerces-C++. Nevertheless, HexType still has considerable overhead from managing the disjoint metadata structure and tracking individual objects, and has false positives from imprecise object tracking. To address these issues, we propose a further advanced mechanism, V-Type, which forcibly changes non-polymorphic types into polymorphic types to make sure all objects maintain type information. By doing this, V-Type removes the burden of tracking object allocation and deallocation and of managing a disjoint metadata structure, which reduces performance overhead and improves detection precision. Another major attack vector is memory safety violations, which attackers can exploit by accessing out-of-bounds or deleted memory. For memory safety violation detection, combining a fuzzer with sanitizers is a popular and effective approach. However, we find that the heavy metadata structures of current sanitizers hinder fuzzing effectiveness. We therefore introduce FuZZan to optimize sanitizer metadata structures for fuzzing. Consequently, FuZZan improves fuzzing throughput, which helps the tester discover more unique paths in the same amount of time and find bugs faster. In conclusion, my research aims to eliminate critical and common C/C++ memory and type safety violations through practical program analysis techniques. Toward this goal, these three projects contribute practical means for our community to effectively detect type and memory safety violations.
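
A HexType-style check can be sketched as a disjoint metadata table mapping object addresses to their allocated types, consulted at cast sites. The sketch below simplifies heavily (one exact type id per object, no type hierarchy walk) and its names (typeTable, checkedCast) are invented, not HexType's API.

```cpp
// Sketch of type-confusion checking with a disjoint metadata table:
// allocations register their dynamic type; a checked cast consults the table
// instead of relying on RTTI (the classes here are non-polymorphic).
#include <cstdio>
#include <map>
#include <typeinfo>

static std::map<const void*, const std::type_info*> typeTable;  // disjoint metadata

template <typename T> T *allocRegistered() {
    T *p = new T();
    typeTable[p] = &typeid(T);             // record the allocated type
    return p;
}

struct Base { int b; };                    // non-polymorphic: no vtable
struct Derived : Base { int d; };

template <typename To, typename From> To *checkedCast(From *p) {
    auto it = typeTable.find(p);
    if (it == typeTable.end() || *it->second != typeid(To)) {
        std::printf("type confusion: cast to %s rejected\n", typeid(To).name());
        return nullptr;                    // report instead of corrupting memory
    }
    return static_cast<To*>(p);
}

int main() {
    Base *ok  = allocRegistered<Derived>();
    Base *bad = allocRegistered<Base>();
    checkedCast<Derived>(ok);              // passes: object really is a Derived
    checkedCast<Derived>(bad);             // caught: a Base is not a Derived
}
```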
19

Calculation of WCET with symbolic execution

Österberg, Carl January 2022
Calculating the WCET for schedulability analysis of RTIC applications is today performed with a hybrid approach, combining static analysis of the code with hardware measurements. A fully static analysis tool would allow easier integration into a CI/CD pipeline without the actual hardware. This thesis attempts to compute the WCET statically, using the symbolic execution engine KLEE to generate all possible execution paths of a task and then analysing these paths to approximate the worst case for each path, which yields an approximate WCET for the analysed program. To analyse a path in a program, the low-level intermediate representation used by the LLVM compiler infrastructure (LLVM IR) is compared to the final assembly to draw conclusions about how each LLVM IR instruction is lowered into assembly. To perform this mapping from LLVM IR to assembly, KLEE has been extended to also log each LLVM IR instruction executed along a path. These logs, combined with a translation table, are used to calculate the approximations. The resulting approximations correlate with the actual cycle counts when the analysed program is run on real hardware, which indicates that the tool could be used to approximate the WCET. There are, however, no guarantees, and the tool has not been tested on larger programs.
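
The final approximation step described here, per-path LLVM IR instruction logs combined with a translation table, can be sketched directly; the opcode cycle costs and paths below are invented, not measurements from the thesis.

```cpp
// Sketch of WCET approximation from path logs: sum a worst-case cycle cost per
// logged LLVM IR opcode for each path, then take the maximum over all paths.
#include <algorithm>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical translation table: cycle cost per LLVM IR opcode.
    std::map<std::string, int> cyclesFor = {
        {"add", 1}, {"mul", 3}, {"load", 2}, {"store", 2}, {"br", 2}};

    // One instruction log per feasible path reported by symbolic execution.
    std::vector<std::vector<std::string>> paths = {
        {"load", "add", "store", "br"},              // e.g. the 'then' branch
        {"load", "mul", "mul", "store", "br"}};      // e.g. the 'else' branch

    long wcet = 0;
    for (size_t i = 0; i < paths.size(); ++i) {
        long cost = 0;
        for (const std::string &op : paths[i]) cost += cyclesFor.at(op);
        std::printf("path %zu: %ld cycles\n", i, cost);
        wcet = std::max(wcet, cost);                 // WCET = worst path
    }
    std::printf("approximate WCET: %ld cycles\n", wcet);
}
```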
20

Extraction and traceability of annotations for WCET estimation

Li, Hanbing 09 October 2015
Real-time systems have become ubiquitous and play an important role in our everyday life. For hard real-time systems, computing correct results is not the only requirement: the results must also be produced within a bounded time interval. Knowing the worst-case execution time (WCET) is necessary to guarantee that the system meets its timing constraints. For tight WCET estimation, annotations are required. Annotations are usually added at the source code level, but WCET analysis is performed at the binary code level. Compiler optimization sits between these two levels and affects the structure of the code and the annotations. We propose a transformation framework that, for each optimization, traces the annotation information from the source code level to the binary code level. The framework can transform the annotations without loss of flow information. We choose LLVM as the compiler to implement our framework, and we use the Mälardalen, TSVC and gcc-loops benchmarks to demonstrate the impact of our framework on compiler optimizations and annotation transformation. The experimental results show that with our framework many optimizations can be turned on while the WCET can still be estimated safely, and the estimated WCET is better (lower) than the original one. We also show that compiler optimizations are beneficial for real-time systems.
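
One concrete transformation rule of the kind such a framework must implement: when a loop annotated with bound n is unrolled by factor u, the annotation on the unrolled loop becomes ceil(n/u). The sketch below illustrates only this single rule, with invented names; it is not the thesis framework itself.

```cpp
// Sketch of one annotation transformation rule: a loop-bound annotation is
// rewritten alongside loop unrolling so no flow information is lost.
#include <cstdio>

struct LoopBound { int bound; };             // flow annotation: max iterations

// Transform the source-level bound alongside the unrolling optimization.
static LoopBound unrollAnnotation(LoopBound src, int factor) {
    return {(src.bound + factor - 1) / factor};   // ceil(n / factor)
}

int main() {
    LoopBound src{100};                      // source annotation: <= 100 iters
    int factor = 4;
    LoopBound bin = unrollAnnotation(src, factor);
    // Binary-level analysis now uses the transformed bound: 25 iterations of
    // the 4-wide body carry the same flow information as the original bound.
    std::printf("source bound %d -> unrolled bound %d (factor %d)\n",
                src.bound, bin.bound, factor);
}
```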
