  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Adaptive predication via compiler-microarchitecture cooperation

Kim, Hyesoon, 1974- 28 August 2008 (has links)
Not available / text
62

Area efficient PLA's for the recognition of regular expression languages

Chandrasekhar, Muthyala. January 1985 (has links)
No description available.
63

Compiling for a Multithreaded Horizontally-microcoded Soft Processor Family

Tili, Ilian 28 November 2013 (has links)
Soft processing engines make FPGA programming simpler for software programmers. TILT is a multithreaded soft processing engine that contains multiple deeply pipelined functional units of varying latency. In this thesis, we present a compiler framework for compiling and scheduling for TILT. By using the compiler to generate schedules and manage the hardware, we create computationally dense designs (high throughput per unit of hardware area) that make compelling processing engines. High schedule density is achieved by mixing instructions from different threads and by prioritizing the longest path of the data-flow graphs. Averaged across benchmark kernels, we achieve 90% of the theoretical throughput and reduce the performance gap relative to custom hardware from 543x for a scalar processor to only 4.41x by replicating TILT cores up to a comparable area cost. We also present methods for quickly navigating the design space and predicting the area of hardware configurations.
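The scheduling approach the abstract describes, mixing instructions from independent threads and prioritizing the longest path of the data-flow graph, is in essence list scheduling with a critical-path priority. The sketch below illustrates that idea only; it is not taken from the thesis, and the Op structure, latencies, and one-issue-per-cycle model are simplifying assumptions.

```cpp
// Minimal sketch of longest-path-priority list scheduling (not TILT's actual
// compiler): operations form a data-flow DAG; ready operations are issued in
// order of decreasing critical-path length, which naturally interleaves
// independent threads so long-latency functional units stay busy.
#include <algorithm>
#include <vector>

struct Op {
    int thread;                 // owning thread (tag only in this sketch)
    int latency;                // cycles on its functional unit
    std::vector<int> succs;     // indices of dependent operations
    int pathLen = 0;            // longest path to a DAG sink, in cycles
    int pending = 0;            // unscheduled predecessors
    int ready = 0;              // earliest cycle the op may issue
};

// Longest path from each op to any sink, assuming ops are listed in
// topological order (all successors have higher indices).
void computePathLengths(std::vector<Op>& ops) {
    for (int i = static_cast<int>(ops.size()) - 1; i >= 0; --i) {
        int best = 0;
        for (int s : ops[i].succs) best = std::max(best, ops[s].pathLen);
        ops[i].pathLen = ops[i].latency + best;
    }
}

// Greedy list scheduler: one op issued per cycle, ties broken toward the
// longer critical path. Returns the issue order.
std::vector<int> schedule(std::vector<Op>& ops) {
    for (auto& op : ops)
        for (int s : op.succs) ++ops[s].pending;
    std::vector<int> issueCycle(ops.size(), -1), order;
    for (int cycle = 0; order.size() < ops.size(); ++cycle) {
        int pick = -1;
        for (int i = 0; i < static_cast<int>(ops.size()); ++i)
            if (issueCycle[i] < 0 && ops[i].pending == 0 && ops[i].ready <= cycle &&
                (pick < 0 || ops[i].pathLen > ops[pick].pathLen))
                pick = i;
        if (pick < 0) continue;                   // stall: nothing ready yet
        issueCycle[pick] = cycle;
        order.push_back(pick);
        for (int s : ops[pick].succs) {           // release dependents
            --ops[s].pending;
            ops[s].ready = std::max(ops[s].ready, cycle + ops[pick].latency);
        }
    }
    return order;
}
```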
65

A compiler for the LMT music transcription language

Adler, Stuart Philip January 1974 (has links)
No description available.
66

Run-time comparison C++ vs. Java

Jones, Linwood D. January 1999 (has links)
C++ is one of the most commonly used programming languages in academic and professional environments. Java is a relatively new language that is rapidly gaining popularity and acceptance. Java's designers claim that Java offers all the functionality of C++ and more. Java's syntax is similar to C++, but Java code is not compatible with C++. Java offers platform independence and better support for internet-oriented applications, but platform independence may come at a price, and a major concern regarding any language is performance. This thesis examines the performance of Java and C++ by comparing their runtimes on a simple algorithm (bubblesort). It covers the differences in compilation between an application developed in C++ and one developed in Java, and it reports the execution time of the algorithm written in both languages. / Department of Computer Science
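The measurement the abstract describes, timing a bubblesort to compare language runtimes, can be sketched in C++ as follows. This is not the thesis's original 1999 benchmark code; the array size, seed, and timing harness are illustrative assumptions.

```cpp
// Illustrative timing harness for a bubblesort benchmark (not the thesis's
// code): sort a pseudo-random array and report the wall-clock time.
#include <chrono>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

void bubbleSort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i)
        for (std::size_t j = 0; j + 1 < a.size() - i; ++j)
            if (a[j] > a[j + 1]) std::swap(a[j], a[j + 1]);
}

int main() {
    std::mt19937 rng(42);                 // fixed seed for repeatable runs
    std::vector<int> data(10000);         // assumed problem size
    for (int& x : data) x = static_cast<int>(rng());

    auto start = std::chrono::steady_clock::now();
    bubbleSort(data);
    auto stop = std::chrono::steady_clock::now();

    std::printf("bubblesort of %zu ints: %lld ms\n", data.size(),
        static_cast<long long>(
            std::chrono::duration_cast<std::chrono::milliseconds>(stop - start)
                .count()));
    return 0;
}
```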
67

Design and implementation of tool-chain framework to support OpenMP single source compilation on cell platform

Jiang, Yi. January 2008 (has links)
Thesis (M.S.)--University of Delaware, 2007. / Principal faculty advisor: Guang R. Gao, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
68

Adaptive predication via compiler-microarchitecture cooperation

Kim, Hyesoon, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
69

Otimização em loops no Projeto Xingo / Loops optimization for Xingo Project

Blasi Junior, Francisco 23 May 2005 (has links)
Advisor: Rodolfo Jardim de Azevedo / Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: The optimizations implemented in compilers provide a significant performance improvement for programs and, in many cases, also reduce program size; almost all production programs are compiled with optimization directives to obtain maximum performance. Studying new optimization techniques requires a test environment into which those techniques can be incorporated easily, and the Xingó project was developed with that goal in mind. Because Xingó generates compilable C code, the results of the implemented optimizations are easy to verify. This work presents the implementation of several loop optimizations in the Xingó project, demonstrating that new optimizations can be incorporated, and it analyzes the results of using commercially available tools that check the correctness of each optimization and evaluate the performance of the system with the optimizations in place. / Master's / Computer Engineering / Master of Computing
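Because Xingó emits compilable C code, an optimization can be checked by comparing the emitted source before and after the pass. The fragment below is a generic before/after illustration of one common loop optimization (loop-invariant code motion); it is hand-written for illustration and is not Xingó output.

```cpp
// Generic illustration of loop-invariant code motion, the kind of
// transformation a source-to-source framework exposes for inspection.
#include <cstddef>

// Before: `scale * bias` is recomputed on every iteration even though
// neither operand changes inside the loop.
void scaleBefore(float* out, const float* in, std::size_t n,
                 float scale, float bias) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * (scale * bias);
}

// After: the invariant subexpression is hoisted out of the loop, which is
// what a loop-invariant code motion pass would produce.
void scaleAfter(float* out, const float* in, std::size_t n,
                float scale, float bias) {
    const float factor = scale * bias;   // hoisted loop-invariant value
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * factor;
}
```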
70

Estudo e implementação da otimização de Preload de dados usando o processador XScale / Study and implementation of data Preload optimization using XScale

Oliveira, Marcio Rodrigo de 08 October 2005 (has links)
Advisor: Guido Costa Souza Araujo / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: There is a large market today for embedded-system applications in consumer electronics such as cellular phones, palmtops, and electronic organizers. These products are designed under stringent constraints, including reduced cost, low power consumption, and high performance, so the code a compiler produces for them must run quickly while saving battery energy; such improvements are obtained through source-program transformations known as code optimizations. Data preload consists of moving data from a higher level of the memory hierarchy to a lower level before the data is actually needed, reducing the memory-latency penalty. This dissertation describes the implementation of a data preload optimization in the Xingo compiler for the Pocket PC platform, whose architecture uses an XScale processor. The XScale architecture provides a preload instruction that prefetches data into the cache; the optimization uses heuristics to insert preload instructions into the program's intermediate code, predicting which data will be used and would otherwise miss in the cache, and bringing that data into the cache before it is used. This strategy minimizes the data-cache miss rate and reduces the time spent on memory accesses. Several well-known benchmark programs, including DSPstone and MiBench, were used for evaluation; the results show a considerable performance improvement for most of the tested programs, with many achieving improvements of more than 30%. / Master's / Code Optimization / Master of Computer Science
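The preload idea, issuing a prefetch for data several iterations before it is used so that it is already in the cache when needed, can be sketched with the GCC/Clang __builtin_prefetch intrinsic, which on ARM/XScale-class targets typically lowers to the PLD (preload data) instruction. This sketch is not code generated by the Xingo compiler, and the prefetch distance is an assumed tuning parameter.

```cpp
// Illustration of software data preload (prefetch), not Xingo-generated code:
// request the cache line that will be needed a few iterations from now so the
// memory latency overlaps with useful work.
#include <cstddef>

// __builtin_prefetch is a GCC/Clang intrinsic; on ARM/XScale-class targets it
// typically maps to the PLD (preload data) instruction.
float sumWithPreload(const float* data, std::size_t n) {
    constexpr std::size_t kPrefetchDistance = 16;  // assumed tuning parameter
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        if (i + kPrefetchDistance < n)
            __builtin_prefetch(&data[i + kPrefetchDistance], /*rw=*/0,
                               /*locality=*/1);
        sum += data[i];
    }
    return sum;
}
```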
