101

Automatic Tuning of Scientific Applications

Qasem, Apan January 2007 (has links)
Over the last several decades we have witnessed tremendous change in the landscape of computer architecture. New architectures have emerged at a rapid pace with computing capabilities that have often exceeded our expectations. However, the rapid rate of architectural innovation has also been a source of major concern for the high-performance computing community. Each new architecture, or even a new model of a given architecture, has brought with it new features that have added to the complexity of the target platform. As a result, it has become increasingly difficult to exploit the full potential of modern architectures for complex scientific applications. The gap between the theoretical peak and the actual achievable performance has increased with every step of architectural innovation. As multi-core platforms become more pervasive, this performance gap is likely to increase. To deal with the changing nature of computer architecture and its ever-increasing complexity, application developers laboriously retarget code, by hand, which often costs many person-months even for a single application. To address this problem, we developed a software-based strategy that can automatically tune applications to different architectures to deliver portable high performance. This dissertation describes our automatic tuning strategy. Our strategy combines architecture-aware cost models with heuristic search to find the most suitable optimization parameters for the target platform. The key contribution of this work is a novel strategy for pruning the search space of transformation parameters. By focusing on architecture-dependent model parameters instead of transformation parameters themselves, we show that we can dramatically reduce the size of the search space and yet still achieve most of the benefits of the best tuning possible with exhaustive search. We present an evaluation of our strategy on a set of scientific applications and kernels on several different platforms. The experimental results presented in this dissertation suggest that our approach can produce significant performance improvements on a range of architectures at a cost that is not overly demanding.
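To make the pruning idea concrete, the following is a minimal C++ sketch of tuning one architecture-dependent model parameter (here, the fraction of cache assumed to be effectively usable) and deriving the concrete tile size from it, rather than searching over all legal tile sizes directly. Every name (`Machine`, `tile_from_model`, `measured_time`) and every cost number is an illustrative assumption, not part of the dissertation; a real tuner would compile and time the transformed code instead of calling a synthetic cost function.

```cpp
// Hedged sketch: search a small space of model parameters instead of the
// much larger space of raw transformation (tile-size) parameters.
#include <cstdio>

// Hypothetical machine description (illustrative numbers).
struct Machine {
    int cache_bytes;     // e.g. an 8 KiB level-1 data cache
    int element_bytes;   // size of one array element
};

// Cost model: derive a tile size from the fraction of the cache that the
// model assumes is effectively usable -- the parameter being tuned.
int tile_from_model(const Machine& m, double effective_cache_fraction) {
    int usable = static_cast<int>(m.cache_bytes * effective_cache_fraction);
    int tile = usable / (3 * m.element_bytes);   // assume three tiles live at once
    return tile > 1 ? tile : 1;
}

// Stand-in for compiling and timing the tiled kernel; a real tuner would
// run the transformed code here rather than a synthetic cost function.
double measured_time(int tile) {
    double capacity_miss_penalty = (tile > 80) ? (tile - 80) * 0.5 : 0.0;
    double loop_overhead = 100.0 / tile;
    return loop_overhead + capacity_miss_penalty;
}

int main() {
    Machine m{8 * 1024, 8};
    double best_frac = 0.1;
    double best_time = 1e30;
    // Ten candidate model parameters cover the space; an exhaustive search
    // over every legal tile size would be far larger.
    for (int step = 1; step <= 10; ++step) {
        double frac = 0.1 * step;
        int tile = tile_from_model(m, frac);
        double t = measured_time(tile);
        if (t < best_time) { best_time = t; best_frac = frac; }
    }
    std::printf("best effective-cache fraction %.1f -> tile size %d\n",
                best_frac, tile_from_model(m, best_frac));
    return 0;
}
```

The point of the design is that the model parameter space stays small and architecture-meaningful, while the cost model maps each candidate onto whatever transformation parameters the target actually needs.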
102

Kompiliatorių internacionalizacija / Internationalization of Compilers

Laucius, Rimgaudas 04 December 2007 (has links)
Internationalization of software is the prerogative of the producer and is part of the software development process. It is therefore strongly influenced by the level of internationalization of the tools used in development. If the tools are not sufficiently internationalized, the process becomes impossible or requires substantial additional investment. For example, a programmer will clearly face difficulties developing internationalized software if the programming tools do not allow multilingual text in the source code. Earlier work on software localization [DL03] [La03] [DL04] and on adapting the Free Pascal compiler for Lithuanian schools [DL01] [La01] revealed that many problems still arise when localizing software, and that their cause is an insufficient level of internationalization. Many authors tend to look for these causes in the software development (internationalization) process [Yo01] [Ye03] [Su01]. The root causes, however, lie deeper, namely in the insufficient internationalization of the software development tools themselves. The method presented in this work for assessing the internationalization level of compilers makes it possible to evaluate that level, to judge a compiler's ability to support the development of internationalized software, and to compare compilers in terms of internationalization. / The experience gained from participating in the localization of “OpenOffice.org”, “Mozilla”, “AbiWord” and other software has revealed that even software developed for international markets is often insufficiently internationalized. As a result, its localization is more difficult and accompanied by various problems. Investigating the origin of the low level of software internationalization and looking for a solution to this problem, several hypotheses were formulated and tested. Tasks of the work: 1. To analyse the scientific and methodical literature related to software internationalization and discuss the theoretical aspects. 2. To analyse and compare the most frequently used compilers in terms of internationalization. 3. To experimentally internationalize the chosen compiler. After corroboration of the hypotheses, additional objectives were added: 4. To analyse aspects of compiler internationalization and systematize them. 5. To prepare a method for the internationalization of compilers.
103

Darbui su duomenų bazėmis skirtos programavimo kalbos kompiliatorius .NET platformai / Database oriented programming language compiler for .NET framework

Bieliūnas, Rytis 11 August 2009 (has links)
Most applied software in practical use today works with databases to a greater or lesser extent. This is especially relevant when developing accounting, business management and similar software systems, since they make extensive use of the capabilities of database systems. This work examines the Microsoft Navision programming language (C/AL), which is intended for developing such programs, and analyses its advantages and disadvantages. A way is proposed of using the C/AL language to improve the .NET framework's facilities for working with databases, in the form of a new language, C/AL .NET. A prototype compiler and supporting libraries for this language were designed and implemented, and an experimental system was written in the new language. It is shown that programs written in the new language can be integrated with other .NET languages and that the language can be used to successfully solve database programming problems. / Most of the applied software systems used today work with databases in one way or another. This is especially important when developing accounting, business management and similar software systems, because they make extensive use of database management systems. This work examines the Navision programming language (C/AL), which is used for the development of such software. The work analyzes the advantages and disadvantages of C/AL and proposes database management and usage related improvements and tools for the .NET framework by creating a new programming language – C/AL .NET. Compiler and runtime library prototypes for the language were designed and implemented. Using the new language, an experimental system was created. It was shown that programs written in the new language can be integrated with other .NET languages and successfully used to solve certain database programming problems.
104

Lifting the Abstraction Level of Compiler Transformations

Tang, Xiaolong 16 December 2013 (has links)
Production compilers implement optimizing transformation rules for built-in types. What justifies applying these optimizing rules are the axioms that hold for built-in types and the built-in operations supported by those types. Similar axioms also hold for user-defined types and the operations defined on them, and therefore justify a set of optimization rules that may apply to user-defined types. Production compilers, however, do not attempt to construct and apply these optimization rules to user-defined types. Built-in types, together with the axioms that apply to them, are instances of more general algebraic structures; so are user-defined types and their associated axioms. We use the technique of generic programming, a programming paradigm for designing efficient, reusable software libraries, to identify the commonality of classes of types, whether built-in or user-defined, convey the semantics of those classes of types to compilers, design scalable and effective program analyses for them, and eventually apply optimizing rules to the operations on them. In generic programming, algorithms and data structures are defined in terms of such algebraic structures, and the same definitions are reused for many types, both built-in and user-defined. This dissertation applies generic programming to compiler analyses and transformations. Analyses and transformations are specified for general algebraic structures, and they apply to all types, both built-in and user-defined.
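As an illustration of how an axiom shared by built-in and user-defined types can justify the same rewrite, the following hedged C++ sketch declares the monoid identity axiom (x + identity == x) for both `int` and a user-defined concatenation type, so one generic simplification applies to both. The `monoid_traits` trait and the fold function are invented for this example and are not the dissertation's actual framework.

```cpp
// Hedged sketch: the identity axiom x + identity == x holds for int and
// for a user-defined type, so one generic rewrite is justified for both.
// monoid_traits and add_identity_folded are names invented for this example.
#include <cassert>
#include <string>

// User-defined type: strings under concatenation form a monoid.
struct Concat {
    std::string value;
};
Concat operator+(const Concat& a, const Concat& b) { return {a.value + b.value}; }
bool operator==(const Concat& a, const Concat& b) { return a.value == b.value; }

// Trait declaring that T forms a monoid under + with the given identity.
template <typename T> struct monoid_traits;   // primary template: axiom unknown
template <> struct monoid_traits<int>    { static int    identity() { return 0; } };
template <> struct monoid_traits<Concat> { static Concat identity() { return {""}; } };

// The rewrite a compiler could apply: adding the identity is a no-op, so the
// addition need not be evaluated at all.  Valid for any type with the axiom.
template <typename T>
T add_identity_folded(const T& x) {
    return x;
}

int main() {
    int i = 42;
    Concat c{"abc"};
    // Unoptimized and folded forms agree for the built-in and the user type.
    assert(i + monoid_traits<int>::identity() == add_identity_folded(i));
    assert(c + monoid_traits<Concat>::identity() == add_identity_folded(c));
    return 0;
}
```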
105

Language Implementation by Source Transformation

Dayanand, Pooja 01 February 2008 (has links)
Compilation involves transforming a high-level language source program into an equivalent assembly or machine language program. Programming language implementation can therefore be viewed as a source-to-source transformation from the original high-level source code to the corresponding low-level assembly language source code. This thesis presents an experiment in implementing an entire programming language system using declarative source transformation. To this end, a complete compiler/interpreter is implemented using TXL, a source transformation system. The TXL-based PT Pascal compiler/interpreter is implemented in phases similar to those in a traditional compiler. In the lexical and syntactic analysis phase, lexical and syntactic errors are detected as the source program is parsed according to the specified TXL grammar. The semantic analysis phase is then run, in which semantic checks are performed on the source program and error messages are generated when semantic errors are detected. The source program is also annotated with type information. The typed intermediate code produced by the semantic analysis phase can be directly executed in the execution phase. Alternatively, the intermediate typed source can be transformed into a bytecode instruction sequence by running the code generation phase. This bytecode instruction sequence is then executed by a TXL implementation of an abstract stack machine in the code simulation phase. The TXL-based PT Pascal compiler/interpreter is compared against the traditional S/SL implementation of the PT Pascal compiler. The declarative style of TXL makes the rules and functions in the TXL-based PT Pascal compiler/interpreter easier to understand, and the TXL implementation requires fewer lines of code than the S/SL implementation. The TXL implementation is, however, slower and less scalable. The implementation of the TXL-based PT Pascal compiler/interpreter and the advantages and disadvantages of this approach are discussed in greater detail in this thesis. / Thesis (Master, Computing) -- Queen's University, 2008-01-29 19:31:31.454
106

Applying support vector machines to discover just-in-time method-specific compilation strategies

Nabinger Sanchez, Ricardo Unknown Date
No description available.
107

Data mining flow graphs in a dynamic compiler

Jocksch, Adam Unknown Date
No description available.
108

Automated synthesis for program inversion

Hou, Cong 20 September 2013 (has links)
We consider the problem of synthesizing program inverses for imperative languages. Our primary motivation comes from optimistic parallel discrete event simulation (OPDES). There, a simulator must process events while respecting logical temporal event-ordering constraints; to extract parallelism, an OPDES simulator may speculatively execute events and only rollback execution when event-ordering violations occur. In this context, the ability to perform rollback by running time- and space-efficient reverse programs, rather than saving and restoring large amounts of state, can make OPDES more practical. Synthesizing inverses also appears in numerous other software engineering contexts, such as debugging, synthesizing “undo” code, or even generating decompressors automatically given only lossless compression code. This thesis consists of three main chapters. In the first chapter, we focus on handling programs with only scalar data and arbitrary control flows. By building a value search graph (VSG) that represents recoverability relationships between variable values, we turn the problem of recovering previous values into a graph search one. Forward and reverse programs are generated according to the search results. For any loop that produces an output state given a particular input state, our method can synthesize an inverse loop that reconstructs the input state given the original loop's output state. The synthesis process consists of two major components: (a) building the inverse loop's body, and (b) building the inverse loop's predicate. Our method works for all natural loops, including those that take early exits (e.g., via breaks, gotos, returns). In the second chapter, we extend our method to handle programs containing arrays. Building on Array SSA, we develop a modified Array SSA form from which equalities between arrays and array elements can easily be constructed. Specifically, to represent the equality between two arrays, we employ array subregions as constraints. During the search, these subregions are calculated to guarantee that all array elements are retrieved. We also develop a demand-driven method for retrieving array elements from a loop, in which we attempt to retrieve an array element from an iteration only if that element has not been modified in previous iterations. To ensure the correctness of each retrieval, boundary conditions are created and checked at the entry and exit of the loop. In the last chapter, we introduce several techniques for handling high-level constructs of C++ programs, including virtual functions, copying of C++ objects, C++ STL containers, C++ source code normalization, and inter-procedural function calls. Since C++ is an object-oriented (OO) language, our discussion in this chapter can also be extended to other OO languages like Java.
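For flavor, here is a hand-written C++ miniature of the forward/reverse pairing the thesis aims to synthesize automatically: a natural loop with a data-dependent trip count whose input state is reconstructed from its output state, as a rollback in OPDES would require. The example and its inverse are constructed by hand for illustration; they are not output of the thesis's value-search-graph algorithm.

```cpp
// Minimal illustration (hand-written, not generated by the thesis's
// synthesizer): a forward computation and an inverse that reconstructs
// the input state from the output state, as a rollback routine would.
#include <cassert>

// Forward: x is repeatedly updated; n counts the iterations performed.
void forward(int& x, int& n) {
    n = 0;
    while (x < 100) {        // natural loop with a data-dependent trip count
        x = x * 2 + 1;       // invertible update: x_old = (x_new - 1) / 2
        ++n;
    }
}

// Inverse: replay the updates backwards using the recorded trip count.
// The inverse loop body and predicate mirror what a synthesized
// reverse program would contain.
void reverse(int& x, int& n) {
    while (n > 0) {
        x = (x - 1) / 2;     // inverse of x = x * 2 + 1
        --n;
    }
}

int main() {
    int x = 7, n = 0;
    int saved_x = x;
    forward(x, n);           // speculative execution of the "event"
    reverse(x, n);           // rollback without checkpointing all state
    assert(x == saved_x && n == 0);
    return 0;
}
```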
109

The Semantics, Formal Correctness and Implementation of History Variables in an Imperative Programming Language.

Mallon, Ryan Peter Kingsley January 2006 (has links)
Storing the history of objects in a program is a common task. Web browsers remember which websites we have visited, drawing programs maintain a list of the images we have modified recently and the undo button in a word processor allows us to go back to a previous state of a document. Maintaining the history of an object in a program has traditionally required programmers either to write specific code for handling the historical data, or to use a library which supports history logging. We propose that maintaining the history of objects in a program could be simplified by providing support at the language level for storing and manipulating the past versions of objects. History variables are variables in a programming language which store not only their current value, but also the values they have contained in the past. Some existing languages do provide support for history variables. However, these languages typically impose many limits and restrictions on the use of history variables. In this thesis we discuss a complete implementation of history variables in an imperative programming language. We discuss the semantics of history variables for scalar types, arrays, pointers, strings, and user defined types. We also introduce an additional construct called an 'atomic block' which allows us to temporarily suspend the logging of a history variable. Using the mathematical system of Hoare logic we formally prove the correctness of our informal semantics for atomic blocks and each of the history variable types we introduce. Finally, we develop an experimental language and compiler with support for history variables. The language and compiler allow us to investigate the practical aspects of implementing history variables and to compare the performance of history variables with their non-history counterparts.
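A rough sense of the proposed feature can be given by emulating it as a C++ library: a wrapper that logs past values on assignment and an RAII guard standing in for the 'atomic block' that suspends logging. The `History` and `AtomicBlock` names and semantics below are illustrative assumptions, not the thesis's actual language or compiler, which provides the feature at the language level rather than as a library.

```cpp
// Illustrative library emulation (not the thesis's language/compiler):
// a wrapper that logs past values on assignment, with an "atomic block"
// guard that temporarily suspends logging.
#include <cassert>
#include <cstddef>
#include <vector>

template <typename T>
class History {
    T current_{};
    std::vector<T> past_;     // older values, oldest first
    bool logging_ = true;
public:
    History& operator=(const T& v) {
        if (logging_) past_.push_back(current_);  // remember the old value
        current_ = v;
        return *this;
    }
    const T& value() const { return current_; }
    // Value as of `n` assignments ago (0 = current value).
    const T& ago(std::size_t n) const {
        return n == 0 ? current_ : past_[past_.size() - n];
    }
    void set_logging(bool on) { logging_ = on; }
};

// RAII helper standing in for the thesis's 'atomic block' construct.
template <typename T>
struct AtomicBlock {
    History<T>& h;
    explicit AtomicBlock(History<T>& hv) : h(hv) { h.set_logging(false); }
    ~AtomicBlock() { h.set_logging(true); }
};

int main() {
    History<int> x;
    x = 1; x = 2; x = 3;
    assert(x.value() == 3 && x.ago(1) == 2 && x.ago(2) == 1);
    {
        AtomicBlock<int> guard(x);   // updates inside are not logged
        x = 99; x = 100;
    }
    x = 4;
    assert(x.ago(1) == 100 && x.ago(2) == 2);
    return 0;
}
```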
110

NANOCONTROLLER PROGRAM OPTIMIZATION USING ITE DAGS

Rajachidambaram, Sarojini Priyadarshini 01 January 2007 (has links)
Kentucky Architecture nanocontrollers employ a bit-serial SIMD-parallel hardware design to execute MIMD control programs. A MIMD program is transformed into equivalent SIMD code by a process called Meta-State Conversion (MSC), which makes heavy use of enable masking to distinguish which code should be executed by each processing element. Both the bit-serial operations and the enable masking imposed on them are expressed in terms of if-then-else (ITE) operations implemented by a 1-of-2 multiplexor, greatly simplifying the hardware. However, it takes many ITEs to implement even a small program fragment. Traditionally, bit-serial SIMD machines have been programmed by expanding a fixed bit-serial pattern for each word-level operation. Instead, nanocontrollers can make use of the fact that ITEs are equivalent to the operations in Binary Decision Diagrams (BDDs), and can apply BDD analysis to optimize the ITEs. This thesis proposes and experimentally evaluates a number of techniques for minimizing the complexity of the BDDs, primarily by manipulating normalization ordering constraints. The best method found is a new approach in which a simple set of optimization transformations is followed by normalization using an ordering determined by a Genetic Algorithm (GA).
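The following hedged C++ sketch shows the ITE (1-of-2 multiplexor) representation in miniature, with a few standard local simplifications (such as ite(v, x, x) = x) applied while building the DAG, and an enable-masked assignment of the kind meta-state conversion produces. The data structures and rules are a generic BDD-style illustration, not the nanocontroller toolchain's actual implementation or the GA-based ordering evaluated in the thesis.

```cpp
// Hedged sketch of the if-then-else (ITE) representation used by
// bit-serial code: each non-terminal node is a 1-of-2 multiplexor
// selecting between two sub-DAGs based on one input bit.
#include <cassert>
#include <vector>

struct IteDag {
    // nodes[0] and nodes[1] are the constants false and true; a variable v
    // is introduced as ite(v, true, false), i.e. the bit itself.
    struct Node { int var; int then_, else_; };   // var unused for constants
    std::vector<Node> nodes{{-1, 0, 0}, {-1, 1, 1}};

    int var(int v) { return ite_node(v, 1, 0); }   // the input bit v

    // Build ite(v, t, e) with a simple local simplification that shrinks
    // the DAG before any normalization/ordering pass would run.
    int ite_node(int v, int t, int e) {
        if (t == e) return t;                      // ite(v, x, x) -> x
        nodes.push_back({v, t, e});
        return static_cast<int>(nodes.size()) - 1;
    }

    // Evaluate a node given concrete values for the input bits.
    bool eval(int id, const std::vector<bool>& input) const {
        if (id == 0) return false;
        if (id == 1) return true;
        const Node& n = nodes[id];
        return input[n.var] ? eval(n.then_, input) : eval(n.else_, input);
    }
};

int main() {
    IteDag dag;
    int b = dag.var(1);
    // "bit0 AND bit1" as a multiplexor: if bit0 then bit1 else false.
    int and_ab = dag.ite_node(0, b, 0);
    // Enable masking in miniature: compute the new value only when the
    // enable bit (bit 2) is set, otherwise keep the old value (bit 3).
    int masked = dag.ite_node(2, and_ab, dag.var(3));
    assert(dag.eval(and_ab, {true, true, false, false}) == true);
    assert(dag.eval(masked, {true, false, false, true}) == true);  // disabled: old value kept
    return 0;
}
```

Minimizing DAGs like this one is exactly where the ordering constraints matter: the same function can need very different numbers of ITE nodes depending on the normalization order chosen.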
