11

Atomic block formation for explicit data graph execution architectures

Maher, Bertrand Allen 13 December 2010 (has links)
Limits on power consumption, complexity, and on-chip latency have focused computer architects on power-efficient designs that exploit parallelism. One approach divides programs into atomic blocks of operations that execute semi-independently, which efficiently creates a large window of potentially concurrent operations. This dissertation studies the intertwined roles of the compiler, architecture, and microarchitecture in achieving efficiency and high performance with a block-atomic architecture. For such an architecture to achieve high performance, the compiler must form blocks effectively. The compiler must create large blocks of instructions to amortize the per-block overhead, but control-flow and content restrictions limit its options. Block formation should consider factors such as frequency of execution and block size, for example by selecting control-flow paths that are frequently executed and by exploiting locality of computation to reduce communication overheads. This dissertation determines which characteristics of programs influence block formation and proposes techniques to generate effective blocks. The first contribution is a method for solving the phase-ordering problems inherent to block formation, mitigating the tension between block-enlarging optimizations---if-conversion, tail duplication, loop unrolling, and loop peeling---and scalar optimizations. Given these optimizations, analysis shows that the remaining obstacles to creating larger blocks are inherent in the control-flow structure of applications, and furthermore that any fixed block size entails a sizable amount of wasted space. To eliminate this overhead, this dissertation proposes an architectural implementation of variable-size blocks, which allows the compiler to improve block efficiency dramatically. We use these mechanisms to develop policies for block formation that achieve high performance on a range of applications and processor configurations. We find that the best policies differ significantly depending on the number of participating cores. Using machine learning, we discover generalized policies for particular hardware configurations and find that the best policy varies significantly between applications and with the number of parallel resources available in the microarchitecture. These results show that effective and efficient block-atomic execution is possible when the compiler and microarchitecture are designed cooperatively.
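To make the block-formation trade-off concrete, the sketch below is a minimal, hypothetical greedy policy: it grows each atomic block along the most frequently executed control-flow edge until a fixed size budget is exhausted. The CFG encoding, the 128-instruction budget, and all names are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Hypothetical sketch of greedy atomic-block formation: grow a block along
# its hottest control-flow edge until a size budget is exceeded, then start
# a new block at the first unplaced node on the worklist.

SIZE_LIMIT = 128  # assumed fixed block capacity, in instructions

def form_blocks(cfg, entry, edge_freq):
    """cfg: {basic_block: (instruction_count, [successors])}
    edge_freq: {(src, dst): execution count from profiling}
    Returns a list of atomic blocks, each a list of basic blocks."""
    placed, blocks, worklist = set(), [], [entry]
    while worklist:
        head = worklist.pop()
        if head in placed:
            continue
        block, size, cur = [], 0, head
        # Extend the block along the most frequently taken edge.
        while cur is not None and cur not in placed:
            count, succs = cfg[cur]
            if size + count > SIZE_LIMIT and block:
                break  # budget exhausted; cur (already on the worklist)
                       # will seed a later block
            block.append(cur)
            placed.add(cur)
            size += count
            worklist.extend(s for s in succs if s not in placed)
            cur = max(succs, key=lambda s: edge_freq.get((cur, s), 0),
                      default=None)
        blocks.append(block)
    return blocks

# Tiny example: the hot path A->B is merged; C and D become their own blocks.
cfg = {"A": (40, ["B", "C"]), "B": (60, ["D"]), "C": (50, ["D"]), "D": (50, [])}
freq = {("A", "B"): 90, ("A", "C"): 10, ("B", "D"): 90, ("C", "D"): 10}
print(form_blocks(cfg, "A", freq))  # [['A', 'B'], ['D'], ['C']]
```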
12

Concurrent and distributed functional systems

Spiliopoulou, Eleni January 2000 (has links)
No description available.
13

Techniques and tools for developing Ruby designs

Guo, Shaori January 1997 (has links)
No description available.
14

An optimizing code generator generator.

Wendt, Alan Lee. January 1989 (has links)
This dissertation describes a system that constructs efficient, retargetable code generators and optimizers. chop reads nonprocedural descriptions of a computer's instruction set and of a naive code generator for the computer, and it writes an integrated code generator and peephole optimizer for it. The resulting code generators are very efficient because they interpret no tables; they are completely hard-coded, and they build no complex data structures to communicate between the code generation and optimization phases. Interphase communication is reduced to the point that the code generator's output is often encoded in the program counter and conveyed to the optimizer by jumping to the right label. chop's code generator and optimizer are based on a very simple formalism, namely rewriting rules. An instrumented version of the compiler infers the optimization rules as it compiles a training suite, and it records them for translation into hard code and inclusion in the production version. I have replaced the Portable C Compiler's code generator with one generated by chop. Despite a costly interface, the resulting compiler runs 30% to 50% faster than the original Portable C Compiler (pcc) and generates comparable code. This figure is diluted by shared lexical analysis, parsing, and semantic analysis and by comparable code emission. Allowing for these, the new code generator appears to run approximately seven times faster than that of the original pcc.
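The rewriting-rule formalism admits a compact illustration. The sketch below is a table-driven toy (chop's generated optimizers are hard-coded, by contrast): each rule inspects a two-instruction window and, on a match, returns replacement code; the rules are applied to a fixed point. The tuple IR and the particular rules are illustrative assumptions.

```python
# Hypothetical two-instruction peephole optimizer driven by rewrite rules.
# Instructions are tuples like ("store", "r1", "sp+4"). Each rule returns a
# replacement list on a match, or None (implicitly) if it does not apply.

def double_negate(a, b):
    # neg r ; neg r  =>  (nothing)
    if a[0] == "neg" and b == a:
        return []

def store_then_load(a, b):
    # store r, addr ; load r, addr  =>  store r, addr
    if a[0] == "store" and b == ("load", a[1], a[2]):
        return [a]

def add_zero(a, b):
    # any ; addi r, r, 0  =>  any   (adding zero is a no-op)
    if b[0] == "addi" and b[1] == b[2] and b[3] == 0:
        return [a]

RULES = [double_negate, store_then_load, add_zero]

def peephole(code):
    """Apply two-instruction rewrite rules until a fixed point is reached."""
    changed = True
    while changed:
        changed = False
        i = 0
        while i + 1 < len(code):
            for rule in RULES:
                repl = rule(code[i], code[i + 1])
                if repl is not None:
                    code[i:i + 2] = repl
                    changed = True
                    break
            else:
                i += 1  # no rule matched; slide the window
    return code

# Example: a redundant load after a store, then a useless add of zero.
prog = [("store", "r1", "sp+4"), ("load", "r1", "sp+4"),
        ("addi", "r2", "r2", 0)]
print(peephole(prog))  # [('store', 'r1', 'sp+4')]
```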
15

Practical secure information flow in programming languages

Deng, Zhenyue 22 June 2005 (has links)
If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend in recent research focuses on accommodating sophisticated modern language features, but this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Problems essential to practical use, such as type inference and error reporting, have also received little attention. This dissertation identifies and solves major theoretical and practical hurdles to the application of secure information flow. We adopt a minimalist approach to designing our language in order to ensure a simple, lenient type system: we start with a small imperative language and add only the features we deem most important for practical use. One language feature we address is arrays. Because of the various leakage channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We present a novel approach to array operations that leads to simple and lenient typing of arrays. Type inference is necessary because a user is usually concerned only with the security types of a program's input/output variables and would like the types of all auxiliary variables to be inferred automatically. We present a type inference algorithm B and prove its soundness and completeness. Moreover, algorithm B stays close to the program and the type system, and therefore facilitates informative error reporting generated in a cascading fashion. Algorithm B and the error reporting have been implemented and tested. Lastly, we present a novel framework for developing applications that preserve the privacy of user information. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are refined incrementally, based on feedback from type checking and inference. Core computations interact with code modules from the involved parties only through well-defined interfaces, and all code modules are digitally signed to ensure their authenticity and integrity.
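The core constraint such a type system enforces fits in a few lines. The sketch below is a hypothetical dynamic check over a two-point LOW/HIGH lattice; the dissertation's analysis is static and far more refined, so this is only an illustration of the flow discipline being verified, with all names assumed.

```python
# Hypothetical sketch of the secure-flow constraint: an assignment x := e is
# accepted only if the level of e (the join of its variables' levels, joined
# with the control context pc to cover implicit flows) flows to the level of x.

LOW, HIGH = 0, 1

levels = {"pin": HIGH, "salt": LOW, "display": LOW, "log": HIGH}

def expr_level(expr_vars):
    """The level of an expression is the join (max) of its variables' levels."""
    return max((levels[v] for v in expr_vars), default=LOW)

def check_assign(target, expr_vars, pc=LOW):
    """Reject any flow from a higher level to a lower one."""
    src = max(expr_level(expr_vars), pc)
    if src > levels[target]:
        raise TypeError(f"insecure flow into '{target}'")

check_assign("log", ["pin", "salt"])   # ok: HIGH join LOW flows to HIGH
try:
    check_assign("display", ["pin"])   # HIGH must not reach LOW
except TypeError as err:
    print(err)                         # insecure flow into 'display'
```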
16

Object-oriented concurrent programming on the connection machine with COOL (Concurrent Object-Oriented Language)

Drake, Maria Rosa 10 April 1995 (has links)
The quest for speed and the need to solve ever more complex problems have led to the development of powerful computer systems such as the Connection Machine, and concurrent processing promises a solution to these problems. The Connection Machine has an inherently parallel architecture that software can exploit. COOL (Concurrent Object-Oriented Language) has been developed to provide the Connection Machine with a subset of C* that includes several concurrent constructs.
17

An algebraic approach to the design of compilers for object-oriented languages

DURAN, Adolfo Almeida January 2005 (has links)
In this work we discuss the design of compilers that are correct by construction for object-oriented languages. A correct compiler is one that guarantees that the semantics are preserved when the source program is translated into the target language. The design of correct compilers for imperative languages is well established; the major remaining challenge is an approach that handles object-oriented features. In this thesis, we describe an algebraic approach to constructing correct compilers for an object-oriented language called ROOL (an acronym for Refinement Object-oriented Language), which is similar to Java and C++. The language includes classes, inheritance, dynamic binding, recursion, casts and type tests, and class-based visibility. In our approach, we address the compiler-correctness problem by recasting the compilation task as a program-refinement task. Compilation is identified with the reduction of a source program, written in an executable subset of the language, to a normal form. The normal form is produced by a series of correctness-preserving transformations, each proved correct from the basic laws of the language; the process is therefore correct by construction. The main advantage of our approach is that it characterizes the compilation process within a uniform framework in which comparisons and translations between semantics are avoided. The reduction to normal form is formalized as an algebra whose central notion is program refinement, so the product of compilation is a program in the source language itself. Our normal form is an interpreter, written in the source language, that emulates the behavior of the target machine, and the sequence of generated instructions is captured from this interpreter. We define the ROOL Virtual Machine (RVM), modeled on the Java Virtual Machine (JVM), as our target machine. This uniformity means that all the reasoning needed to establish the correctness of the compilation process is carried out in a single framework: an object-oriented language whose semantics is given by algebraic laws. No separate theory of the source or target language is developed or used. The compilation process is justified by normal-form reduction theorems. There are five phases: class pre-compilation, method-call redirection, simplification, control elimination, and data refinement. For each phase, a theorem establishes the expected result, and a main theorem connects the intermediate steps and establishes the result for the whole process. Since the reduction theorems for each phase are proved from the basic laws of ROOL, together they establish the correctness of the entire process.
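The phase structure can be illustrated, very loosely, as a pipeline of semantics-preserving rewrites. The sketch below uses a toy expression IR and checks each phase dynamically against a reference interpreter on a sample input; the thesis, by contrast, proves each phase correct algebraically. The IR, the single implemented phase, and all names are assumptions.

```python
# Hypothetical sketch of compilation as staged, semantics-preserving
# reduction: each phase rewrites the program, and a reference interpreter
# spot-checks the obligation that the thesis discharges by algebraic proof.

def interpret(prog, env):
    """Reference semantics for a toy IR:
    ('lit', n) | ('var', name) | ('add', e1, e2)."""
    tag = prog[0]
    if tag == "lit":
        return prog[1]
    if tag == "var":
        return env[prog[1]]
    return interpret(prog[1], env) + interpret(prog[2], env)

def simplify(prog):
    """One illustrative phase: fold constant additions."""
    if prog[0] == "add":
        a, b = simplify(prog[1]), simplify(prog[2])
        if a[0] == b[0] == "lit":
            return ("lit", a[1] + b[1])
        return ("add", a, b)
    return prog

# Stand-ins for the remaining phases; in the real development each must be
# a refinement proved from the language's basic laws.
phases = [simplify]

def compile_by_refinement(prog, sample_env):
    before = interpret(prog, sample_env)
    for phase in phases:
        prog = phase(prog)
        assert interpret(prog, sample_env) == before  # dynamic spot-check
    return prog

p = ("add", ("var", "x"), ("add", ("lit", 2), ("lit", 3)))
print(compile_by_refinement(p, {"x": 1}))  # ('add', ('var', 'x'), ('lit', 5))
```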
18

Reducing a complex instruction set computer.

January 1988 (has links)
Tse Tin-wah. / Thesis (M.Ph.)--Chinese University of Hong Kong, 1988. / Bibliography: leaves [73]-[78]
19

Scaling CFL-reachability-based alias analysis: theory and practice

January 2013 (has links)
Zhang, Qirun. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 170-186). / Abstracts also in Chinese.
20

Performance improvement through predicated execution in VLIW machines

Biglari-Abhari, Morteza January 2000 (has links)
Bibliography: leaves 136-153. / xiv, 153 leaves : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Investigates techniques to achieve performance improvement in Very Long Instruction Word machines through predicated execution. / Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 2000
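Predicated execution can be illustrated with a small if-conversion example. The sketch below is a hypothetical model, not any particular VLIW ISA: the branch in branchy is converted into straight-line operations that each notionally carry a predicate, with the results combined by a select.

```python
# Hypothetical illustration of if-conversion for predicated execution.
# Both sides of the branch execute; the predicate selects which result
# retires, removing the hard-to-predict control-flow transfer.

def branchy(x, a, b):
    # Control flow: the hardware must predict and possibly flush.
    if x > 0:
        return a + 1
    return b - 1

def predicated(x, a, b):
    p = x > 0      # predicate register, set by a compare operation
    t1 = a + 1     # executes under predicate p
    t2 = b - 1     # executes under predicate not-p
    return t1 if p else t2  # models a select operation, not a branch

assert branchy(3, 10, 20) == predicated(3, 10, 20) == 11
assert branchy(-1, 10, 20) == predicated(-1, 10, 20) == 19
```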
