  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Data mining flow graphs in a dynamic compiler

Jocksch, Adam Paul. January 2009 (has links)
Thesis (M. Sc.)--University of Alberta, 2009. / Title from PDF file main screen (viewed on Oct. 21, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Computing Science, University of Alberta." Includes bibliographical references.
32

Automatically proving the correctness of program analyses and transformations

Lerner, Sorin. January 2006 (has links)
Thesis (Ph. D.)--University of Washington, 2006. / Vita. Includes bibliographical references (p. 129-140).
33

A compiler optimization framework for Concordia parallel C

Liang, Wen. January 1998 (has links)
Thesis (M.Comp.Sc.)--Dept. of Computer Science, Concordia University, 1998. / "September 1998." Includes bibliographical references (leaves 100-104). Available also on the Internet.
34

Design automation of a machine-independent code generator

Clayton, Peter Graham 22 January 2013 (has links)
As both computer languages and architectures continue to proliferate, there is a continuing need for new compilers. Researchers have attempted to ease the work of producing compilers by developing methods to automate compiler writing. While much work has been done (and considerable success achieved) in writing parsers which can handle a variety of source languages (using mainly table-driven analysis methods), less progress has been made in formalizing the code generation end of the compiler. Nevertheless, some of the more recent publications in code generation stress portability or retargetability of the resulting compiler. A number of code generator synthesisers have been developed, some of which produce code that can be compared in quality with that produced by a conventional code generator. However, because of the complexity of generalizing the mapping from source language to target machine, and the need for efficiency of various kinds, code generator synthesisers are large, complicated programs. Consequently, the person who develops a code generator using one of these tools invariably needs to be a code generation specialist himself. Many compilers follow a pattern of having a front end which generates intermediate code, and a back end which converts intermediate code to machine code. The intermediate code is effectively machine independent, or can be designed that way. With these points in mind, we have set out to write a system of programs which (1) will allow the generation of such a back end in a reasonably short time, for a general intermediate code and for a general machine code, and (2) can be used by anyone who has a sound knowledge of the target machine's architecture and associated assembler language, but is not necessarily a specialist compiler writer. The system consists of a series of friendly, interactive programs by means of which the user sets up tables defining the architecture and assembly-level instructions for the target machine, and the code templates onto which intermediate codes produced by a parser have been mapped. A general notation has been developed to represent machine instructions using the same format as the target assembler. Thus the code generator writer is able to write code sequences to perform the effects of the intermediate codes, using assembly mnemonics familiar to him. The resultant table-driven code generator simply replaces a sequence of intermediate codes by their respective code templates, relocating them in memory and filling in addresses known only at code-generation time. This thesis describes the use and implementation details of this generalized code generation system. As an example, the implementation of a code generator for a CLANG [23] parser on an 8080 processor is described. The discussion also includes guidelines on how to implement a loader and associated run-time routines for use in executing the object code. The results of a number of benchmarks have shown, as expected, that code produced by a code generator developed in this manner is larger and slower than that from a special-purpose optimizing code generator, but is still several times faster than interpreting the intermediate code. The major benefit to be gained from using this system lies in the shorter development time by a less skilled person.
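To make the template-driven scheme concrete, the short Python sketch below replaces a sequence of intermediate codes by assembler templates and fills in operand fields at generation time. The intermediate codes, the 8080-style mnemonics, and the table layout are invented for illustration; they are not taken from the thesis.

    # Minimal sketch of a table-driven code generator. The template table is the
    # kind of artifact the thesis has the user build interactively; the opcodes
    # and mnemonics here are hypothetical stand-ins.
    TEMPLATES = {
        "LOADCONST": ["LXI H,{value}"],   # load a constant into the HL register pair
        "PUSH":      ["PUSH H"],          # push HL onto the stack
        "ADD":       ["POP D", "DAD D"],  # pop the second operand and add it to HL
    }

    def generate(intermediate):
        """Replace each intermediate code by its template, filling in operands."""
        asm = []
        for opcode, operands in intermediate:
            for line in TEMPLATES[opcode]:
                asm.append(line.format(**operands))
        return asm

    # Intermediate code for the expression 2 + 3, as a parser might emit it.
    program = [
        ("LOADCONST", {"value": 2}),
        ("PUSH", {}),
        ("LOADCONST", {"value": 3}),
        ("ADD", {}),
    ]
    print("\n".join(generate(program)))

Relocating the emitted sequences and back-patching addresses known only at code-generation time would be an additional pass over the generated lines.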
35

Semi-automatic protocol implementation using an Estelle-C compiler, LAPB and RTS protocols as examples

Lu, Jing January 1990 (has links)
Formal Description Techniques allow for the use of automated tools during the specification and development of communication protocols. Estelle is a standardized formal description technique developed by ISO to remove ambiguities in the specification of communication protocols and services. The UBC Estelle-C compiler automates the implementation of protocols by producing an executable C implementation directly from its Estelle specification. In this thesis, we investigate the automated protocol implementation methodology using the Estelle-C compiler. First, we describe the improvements made to the compiler to support the latest version of Estelle. Then, we present and discuss the semi-automated implementations of the LAPB protocol in the CCITT X.25 Recommendation and the RTS protocol in the CCITT X.400 MHS series using this compiler. Finally, we compare the automatic and manual implementations of the LAPB and RTS protocols in terms of functional coverage, development time, code size, and performance. The results strongly indicate the overall advantages of the automatic protocol implementation method over the manual approach. / Faculty of Science / Department of Computer Science / Graduate
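An Estelle module is, at its core, an extended finite-state machine, and an Estelle-to-C compiler typically emits a transition-table-driven implementation of it. The Python sketch below illustrates that general shape only; the states, events, and actions are simplified stand-ins, not the LAPB or RTS specifications.

    # Hedged sketch of a transition-table-driven protocol automaton, the general
    # shape of code an Estelle compiler generates. States and events are toy values.
    TRANSITIONS = {
        # (current state, input event) -> (next state, output action)
        ("DISCONNECTED", "CONNECT_REQUEST"): ("CONNECTING",   "send SABM"),
        ("CONNECTING",   "UA_RECEIVED"):     ("CONNECTED",    "notify user"),
        ("CONNECTED",    "DISCONNECT_REQ"):  ("DISCONNECTED", "send DISC"),
    }

    def step(state, event):
        """Fire the transition for (state, event); unexpected events are ignored."""
        next_state, action = TRANSITIONS.get((state, event), (state, "ignore"))
        print(f"{state} --{event}--> {next_state} [{action}]")
        return next_state

    state = "DISCONNECTED"
    for event in ["CONNECT_REQUEST", "UA_RECEIVED", "DISCONNECT_REQ"]:
        state = step(state, event)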
36

VPI PROLOG compiler project report

Deighan, John 26 January 2010 (has links)
see document / Master of Science
37

Software-assisted data prefetching algorithms.

January 1995 (has links)
by Chi-sum, Ho. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 110-113). / Contents: 1. Introduction; 2. Related Work (cache performance, non-blocking caches, hardware and software-assisted prefetching, other techniques to reduce and hide memory latencies); 3. Stride CAM Prefetching (architectural model, compiler and hardware support, optimization issues, practicability); 4. Stride Register Prefetching (architectural model, compiler support, prefetch bits, comparison with the Stride CAM model); 5. Small Software-Driven Array Cache; 6. Conclusion and future research. Appendices report simulation results (execution time, memory delay, overhead, hit ratio) for the Stride CAM model and the array cache on the NASA7 benchmark kernels BTRIX, CFFT2D, CHOLSKY, EMIT, GMTRY, MXM, and VPENTA.
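The stride-detection idea at the heart of both the Stride CAM and stride-register schemes can be simulated in a few lines of Python. The table layout and the "same stride seen twice" confirmation rule below are illustrative assumptions, not the exact hardware design described in the thesis.

    # Hedged sketch: per-load stride detection for software-assisted prefetching.
    # Each load (keyed by its PC) remembers its last address and stride; once the
    # same non-zero stride repeats, the next address is predicted as a prefetch target.
    def prefetch_targets(references):
        table = {}        # pc -> (last address, last stride)
        targets = []
        for pc, addr in references:
            if pc in table:
                last_addr, last_stride = table[pc]
                stride = addr - last_addr
                if stride == last_stride and stride != 0:
                    targets.append(addr + stride)   # prefetch one iteration ahead
                table[pc] = (addr, stride)
            else:
                table[pc] = (addr, 0)
        return targets

    # A load at PC 0x40 walking an array of 8-byte elements with unit stride.
    refs = [(0x40, 0x1000 + 8 * i) for i in range(6)]
    print([hex(a) for a in prefetch_targets(refs)])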
38

Compiler-assisted Adaptive Software Testing

Petsios, Theofilos January 2018 (has links)
Modern software is becoming increasingly complex and is plagued with vulnerabilities that are constantly exploited by attackers. The vast numbers of bugs found in security-critical systems and the diversity of errors presented in commercial off-the-shelf software require effective, scalable testing frameworks. Unfortunately, the current testing ecosystem is heavily fragmented, with the majority of toolchains targeting limited classes of errors and applications without offering provably strong guarantees. With software codebases continuously becoming more diverse and complex, the large-scale deployment of monolithic, non-adaptive analysis engines is likely to increase the aforementioned fragmentation. Instead, modern software testing requires adaptive, hybrid techniques that target errors selectively. This dissertation argues that adopting context-aware analyses will enable us to set the foundations for retargetable testing frameworks while further increasing the accuracy and extensibility of existing toolchains. To this end, we initially examine how compiler analyses can become context-aware, prioritizing certain errors over others of the same type. As a use case of our proposed approach, we extend a state-of-the-art compiler's integer error detection pipeline to suppress reports of benign errors by up to 89% in real-world workloads, while still reporting serious errors. Subsequently, we demonstrate how compiler-based instrumentation can be utilized by feedback-driven evolutionary fuzzers to provide multifaceted analyses targeting broader classes of bugs. In this direction, we present differential diversity (δ-diversity), propose a generic methodology for offering state-aware guidance in feedback-driven frameworks, and demonstrate how to retrofit state-of-the-art fuzzers to target broader classes of errors. We provide two such prototype implementations: NEZHA, the first generic differential fuzzer capable of handling logic bugs, and SlowFuzz, the first generic fuzzer targeting complexity vulnerabilities. We applied both prototypes to production software and demonstrated their effectiveness. We found that NEZHA discovered hundreds of logic discrepancies across a wide variety of applications (SSL/TLS libraries, parsers, etc.), while SlowFuzz successfully generated inputs triggering slowdowns in complex, real-world software, including zip parsers, regular expression libraries, and hash table implementations.
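The differential-fuzzing loop behind NEZHA can be sketched compactly: run each mutated input through two implementations, keep inputs whose pair of outcomes has not been seen before (a crude stand-in for δ-diversity guidance), and report inputs on which the implementations disagree. The two toy "implementations" below are invented for illustration; they are not the SSL/TLS libraries or parsers tested in the dissertation.

    # Hedged sketch of differential fuzzing guided by outcome-pair diversity.
    # impl_a enforces a toy length bound that impl_b forgets, so the two disagree
    # on long "OK"-prefixed inputs; both functions are hypothetical stand-ins.
    import random

    def impl_a(data):
        return "accept" if data.startswith(b"OK") and len(data) <= 3 else "reject"

    def impl_b(data):
        return "accept" if data.startswith(b"OK") else "reject"

    def differential_fuzz(iterations=2000, seed=1):
        rng = random.Random(seed)
        corpus = [b"OK", b"xx"]
        seen_pairs = set()            # observed (impl_a outcome, impl_b outcome) pairs
        discrepancies = []
        for _ in range(iterations):
            parent = rng.choice(corpus)
            if rng.random() < 0.5:    # mutate: flip one byte ...
                pos = rng.randrange(len(parent))
                child = parent[:pos] + bytes([rng.randrange(256)]) + parent[pos + 1:]
            else:                     # ... or append one byte
                child = parent + bytes([rng.randrange(256)])
            pair = (impl_a(child), impl_b(child))
            if pair not in seen_pairs:            # new behaviour pair: keep the input
                seen_pairs.add(pair)
                corpus.append(child)
            if pair[0] != pair[1]:                # the two implementations disagree
                discrepancies.append(child)
        return discrepancies

    print(len(differential_fuzz()), "discrepancy-triggering inputs found")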
39

A library for doing polyhedral operations

Wilde, Doran K. 06 December 1993 (has links)
Polyhedra are geometric representations of linear systems of equations and inequalities. Since polyhedra are used to represent the iteration domains of nested loop programs, procedures for operating on polyhedra can be used for doing loop transformations and other program restructuring transformations which are needed in parallelizing compilers. Thus a need for a library of polyhedral operations has recently been recognized in the parallelizing compiler community. Polyhedra are also used in the definition of domains of variables in systems of affine recurrence equations (SARE). ALPHA is a language which is based on the SARE formalism in which all variables are declared over polyhedral domains consisting of finite unions of polyhedra. This thesis describes a library of polyhedral functions which was developed to support the ALPHA language environment, and which is general enough to satisfy the needs of researchers doing parallelizing compilers. This thesis describes the data structures used to represent domains, gives the motivations for the major design decisions that were made in creating the library, and presents the algorithms used for doing polyhedral operations. A new algorithm for recursively generating the face lattice of a polyhedron is also presented. This library has been written and tested, and has been in use since the first quarter of 1993. It is used by research facilities in Europe and Canada which do research in parallelizing compilers and systolic array synthesis. The library is freely distributed by ftp. / Graduation date: 1994
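As a toy illustration of the kind of object such a library manipulates, the Python sketch below represents a two-dimensional polyhedron as a list of affine constraints and implements intersection, membership, and integer-point enumeration over a bounded window. The representation and function names are simplified assumptions, not the library's actual API.

    # Hedged sketch: a polyhedron as constraints a*x + b*y + c >= 0, used the way
    # a parallelizing compiler uses iteration domains. Toy code, not the library's API.
    def intersect(p, q):
        """The intersection of two polyhedra is the union of their constraint sets."""
        return p + q

    def contains(poly, point):
        """True if the point satisfies every constraint of the polyhedron."""
        x, y = point
        return all(a * x + b * y + c >= 0 for (a, b, c) in poly)

    def integer_points(poly, bound=10):
        """Integer points in a bounded window, e.g. the iterations of a loop nest."""
        return [(x, y) for x in range(bound) for y in range(bound) if contains(poly, (x, y))]

    # Iteration domain of a triangular loop nest: 0 <= y <= x <= 4.
    triangle = [(1, 0, 0),    #  x >= 0
                (0, 1, 0),    #  y >= 0
                (1, -1, 0),   #  x - y >= 0   (y <= x)
                (-1, 0, 4)]   # -x + 4 >= 0   (x <= 4)
    print(integer_points(triangle))
    print(integer_points(intersect(triangle, [(0, 1, -2)])))   # additionally require y >= 2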
40

CC-MPI, a compiled communication capable MPI prototype for ethernet switched clusters

Karwande, Amit V. Yuan, Xin. January 2003 (has links)
Thesis (M.S.)--Florida State University, 2003. / Advisor: Dr. Xin Yuan, Florida State University, College of Arts and Sciences, Dept. of Computer Science. Title and description from dissertation home page (viewed Oct. 3, 2003). Includes bibliographical references.
