  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Dynamic Register Allocation for Network Processors

Collins, Ryan 22 May 2006 (has links)
Network processors are custom high-performance embedded processors deployed for a variety of tasks that must operate at high line speeds (Gbits/sec) to prevent packet loss. With the increase in complexity of application domains and the larger code store on modern network processors, network processor programming goes beyond simply exploiting parallelism in packet processing. Unlike the traditional homogeneous threading model, modern network processor programming must support heterogeneous threads that execute simultaneously on a microengine. To support such demands, we first propose hardware management of registers across multiple threads. In their PLDI 2004 paper, Zhuang and Pande first proposed a compiler-based scheme to support register allocation across threads; in this work, we extend their static allocation method to support aggressive register allocation that takes dynamic context into account. We also remove the loads/stores due to aliased memory accesses, converting them into register moves that exploit dead registers. This results in substantial savings in latency and higher throughput, mainly due to the removal of high-latency accesses as well as idle cycles. The dynamic register allocator is designed to be lightweight and low latency through a number of trade-offs. In the second part of this work, our goal is to design an automatic register allocation scheme that makes the compiler transparent to the dual-bank register file design of network processors. By design, network processors mandate that the operands of an instruction be allocated to registers belonging to two different banks. The key goal in this work is to take dynamic context into account to balance register pressure across the banks. Key decisions involve how and where to map an incoming virtual register onto a physical register in a bank, how to evict dead ones, and how to minimize bank-to-bank copies and swaps. It is shown that it is viable to solve both of these problems with simple hardware designs that avail themselves of dynamic context. The performance gains are substantial, and because the designs are simple (and off critical paths), such schemes may be attractive in practice.
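As a rough illustration of the kind of dual-bank allocation decisions the abstract describes, the sketch below shows a greedy policy that maps virtual registers to the less-pressured bank while respecting the constraint that an instruction's operands come from different banks, and that reclaims slots whose values are dead. It is a hypothetical illustration only, not the hardware design from the thesis; the DualBankMapper class, its interface and its bank size are assumptions made for the example.

    # Illustrative sketch (not the thesis's hardware design): a greedy policy
    # for mapping virtual registers onto a dual-bank register file, balancing
    # pressure across the banks and freeing entries whose values are dead.

    class DualBankMapper:
        def __init__(self, bank_size):
            self.banks = [{}, {}]          # bank index -> {virtual reg: physical slot}
            self.free = [set(range(bank_size)), set(range(bank_size))]

        def _lighter_bank(self):
            # Prefer the bank with more free slots to balance register pressure.
            return 0 if len(self.free[0]) >= len(self.free[1]) else 1

        def allocate(self, vreg, forbidden_bank=None):
            """Map a virtual register to a physical slot.

            forbidden_bank enforces the constraint that the two operands of
            one instruction must live in different banks.
            """
            bank = self._lighter_bank()
            if bank == forbidden_bank or not self.free[bank]:
                bank = 1 - bank
            if not self.free[bank]:
                raise RuntimeError("both banks full: spill or bank-to-bank copy needed")
            slot = self.free[bank].pop()
            self.banks[bank][vreg] = slot
            return bank, slot

        def release_dead(self, dead_vregs):
            # Evict mappings whose values are known dead, reclaiming their slots.
            for bank in (0, 1):
                for vreg in list(self.banks[bank]):
                    if vreg in dead_vregs:
                        self.free[bank].add(self.banks[bank].pop(vreg))


    if __name__ == "__main__":
        m = DualBankMapper(bank_size=4)
        b0, _ = m.allocate("v1")
        b1, _ = m.allocate("v2", forbidden_bank=b0)  # operands land in different banks
        assert b0 != b1
        m.release_dead({"v1"})                        # v1's slot becomes free again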
32

CCC86, a generic 8086 C-language cross compiler plus communication package

Hessaraki, Alireza January 1987 (has links)
The Cross Compiler is an excellent and valuable program-development tool. It provides the user with a low-level compiled language that allows character (byte), integer (8086 word) and pointer (8086 one-word address) manipulation. It also allows recursion, has modern control flow and a rich set of operators. The Communication Program, which includes a file-transfer utility, allows students to download or upload their C programs to a PC, and supports use of a modem. File transfer can be done using XON/XOFF or XMODEM. It also supports the INS 8250 UART chip, plus the 16450 high-speed device found in hardware such as the IBM AT Serial/Parallel Adapter. / Department of Computer Science
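As a rough illustration of the XMODEM framing mentioned in the abstract, the sketch below builds one standard 128-byte block with its header and single-byte arithmetic checksum. It is not code from CCC86's communication package; the function name and example payload are invented for the illustration.

    # Illustrative sketch of classic XMODEM framing: SOH, block number, its
    # one's complement, 128 data bytes padded with SUB (0x1A), and a checksum
    # that is the arithmetic sum of the data bytes modulo 256.

    SOH = 0x01  # start of a 128-byte block

    def xmodem_block(block_num: int, payload: bytes) -> bytes:
        """Build one XMODEM block from up to 128 bytes of payload."""
        data = payload.ljust(128, b'\x1a')[:128]
        checksum = sum(data) & 0xFF
        header = bytes([SOH, block_num & 0xFF, 0xFF - (block_num & 0xFF)])
        return header + data + bytes([checksum])

    # Example: frame the first block of a small C source file.
    frame = xmodem_block(1, b'int main(void) { return 0; }')
    assert len(frame) == 132  # 3 header bytes + 128 data bytes + 1 checksum byte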
33

Compiler construction for a simple Pascal-like language

Moon, Hae-Kyung January 1994 (has links)
In this thesis, a compiler called SPASCAL is implemented that translates source programs in a simple Pascal-like language, also called SPASCAL, into target programs in VAX assembly language. The thesis clearly describes the main aspects of a compiler: lexical analysis and syntactic analysis, including the symbol-table routines and the error-handling routines. It uses regular expressions to define the lexical structure and a context-free grammar to define the syntactic structure of SPASCAL. The compiler is constructed using syntax-directed translation, context-free grammars and a set of semantic rules. The SPASCAL compiler is written in standard C under UNIX. / Department of Computer Science
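The sketch below illustrates the regular-expression-driven lexical analysis the abstract describes. The token set is hypothetical, chosen for the example rather than taken from SPASCAL's actual lexical grammar, and keyword handling is omitted.

    # Illustrative sketch: a lexical analyser driven by regular expressions,
    # in the style described for SPASCAL's front end (hypothetical token set).

    import re

    TOKEN_SPEC = [
        ("NUMBER", r"\d+"),
        ("IDENT",  r"[A-Za-z_][A-Za-z0-9_]*"),
        ("ASSIGN", r":="),
        ("OP",     r"[+\-*/]"),
        ("SEMI",   r";"),
        ("SKIP",   r"[ \t\n]+"),
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

    def tokenize(source: str):
        """Yield (kind, lexeme) pairs; raise on characters no rule matches."""
        pos = 0
        while pos < len(source):
            m = MASTER.match(source, pos)
            if not m:
                raise SyntaxError(f"unexpected character {source[pos]!r} at {pos}")
            if m.lastgroup != "SKIP":          # whitespace is discarded
                yield m.lastgroup, m.group()
            pos = m.end()

    print(list(tokenize("x := y + 42;")))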
34

Correct implementation of network protocols

McGuire, Tommy Marcus 28 August 2008 (has links)
Not available / text
35

Incorporating domain-specific information into the compilation process

Guyer, Samuel Zev 29 June 2011 (has links)
Not available / text
36

Compiler development for fixed-point processors

Baudendistel, Kurt 12 1900 (has links)
No description available.
37

A compiler framework for multithreaded parallel systems

Hopper, Michael A. 12 1900 (has links)
No description available.
38

Design of a retargetable compiler for digital signal processors

De, Subrato Kumar 05 1900 (has links)
No description available.
39

Automated design of high performance digital filter chips

McAllister, Christine Joan January 1996 (has links)
No description available.
40

Specialising dynamic techniques for implementing the Ruby programming language

Seaton, Christopher Graham January 2015 (has links)
The Ruby programming language is dynamically typed, uses dynamic and late-bound dispatch for all operators, method calls and many control structures, and provides extensive metaprogramming and introspective tooling functionality. Unlike other languages where these features are available, in Ruby their use is not avoided, and key parts of the Ruby ecosystem use them extensively, even for inner-loop operations. This makes a high-performance implementation of Ruby problematic. Existing implementations either do not attempt to dynamically optimise Ruby programs, or achieve relatively limited success in optimising Ruby programs containing these features.

One way that the community has worked around the limitations of existing Ruby implementations is to write extension modules in the C programming language. These are statically compiled and then dynamically linked into the Ruby implementation. Compared to equivalent Ruby, this C code is often more efficient for computationally intensive code. However, the interface that these C extensions provide is defined by the non-optimising reference implementation of Ruby. Implementations which want to optimise by using different internal representations must do extensive copying to provide the same interface. This then limits the performance of the C extensions in those implementations. This leaves Ruby in the difficult position where not only is the language difficult to implement efficiently, but the previous workaround for that problem, C extensions, also limits efforts to improve performance.

This thesis describes an implementation of the Ruby programming language which embraces the Ruby language and optimises specifically for Ruby as it is used in practice. It provides a high-performance implementation of Ruby's dynamic features, at the same time as providing a high-performance implementation of C extensions. The implementation provides a high level of compatibility with existing Ruby implementations and does not limit the available features in order to achieve high performance.

Common to all the techniques described in this thesis is the concept of specialisation. The conventional approach taken to optimise a dynamic language such as Ruby is to profile the program as it runs. Feedback from the profiling can then be used to specialise the program for the data and control flow it is actually experiencing. This thesis extends and advances that idea by specialising for conditions beyond normal data and control flow. Programs that call a method, or look up a variable or constant by dynamic name rather than literal syntax, can be specialised for the dynamic name by generalising inline caches. Debugging and introspective tooling is implemented by specialising the code for debug conditions such as the presence of a breakpoint or an attached tracing tool. C extensions are interpreted and dynamically optimised rather than being statically compiled, and the interface which the C code is programmed against is provided as an abstraction over the underlying implementation, which can then independently specialise.

The techniques developed in this thesis have a significant impact on the performance of both synthetic benchmarks and kernels from real-world Ruby programs. The implementation of Ruby which has been developed achieves an order of magnitude or better increase in performance compared to the next-best implementation. In many cases the techniques are 'zero-overhead', in that the generated machine code is exactly the same when the most dynamic features of Ruby are used as when only static features are used.
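The sketch below illustrates, in plain Python rather than the thesis's actual machinery, the idea of generalising an inline cache so that it also keys on a dynamically supplied method name, letting calls in the style of Ruby's send specialise on the (receiver class, name) pair. The class, its size limit, and the Greeter example are assumptions made for the illustration.

    # Illustrative sketch: an inline cache for dynamic dispatch, generalised
    # to key on a method name supplied at run time as well as the receiver's
    # class. Lookups that hit the cache skip the slow-path method resolution.

    class DynamicSendCache:
        """Caches (receiver class, method name) -> resolved method."""

        def __init__(self, size=8):
            self.entries = {}   # (type, name) -> function found by lookup
            self.size = size

        def call(self, receiver, name, *args):
            key = (type(receiver), name)
            target = self.entries.get(key)
            if target is None:                        # cache miss: slow-path lookup
                target = getattr(type(receiver), name)
                if len(self.entries) < self.size:     # specialise while the site stays small
                    self.entries[key] = target
            return target(receiver, *args)


    class Greeter:
        def hello(self, who):
            return f"hello, {who}"

    cache = DynamicSendCache()
    print(cache.call(Greeter(), "hello", "world"))  # first call fills the cache
    print(cache.call(Greeter(), "hello", "world"))  # subsequent calls hit it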
