1

Entropy: a cache line replacement algorithm inspired by information entropy

Kobayashi, Jorge Mamoru, 07 June 2010
This work presents a study of the cache line replacement problem in microprocessors. Inspired by the concept of information entropy introduced by Claude E. Shannon in 1948, it proposes a new heuristic for replacing cache lines. The main goal is to better capture and exploit the referential locality of programs and to reduce the cache miss rate during program execution. The proposed algorithm, Entropy, uses the entropy heuristic to estimate the chances that a cache line (or block) will be referenced again after it has been loaded into the cache. A new entropy decay function is introduced to optimize the algorithm's operation. In the experiments, Entropy reduced the miss rate by up to 50.41% compared with the LRU algorithm. The work also proposes a hardware implementation whose complexity and computational cost are comparable to those of LRU. For a 2-MB, 8-way set-associative second-level cache, the additional storage required is on the order of 0.61% of the cache bits. The algorithm was simulated with the SimpleScalar ISA simulator and compared against LRU using the SPEC CPU2000 benchmarks.
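The abstract describes the heuristic only at a high level, so the following is a minimal sketch of how an entropy-inspired, age-decayed reuse estimate might drive victim selection in an 8-way set. The scoring formula, decay constant, and counters are illustrative assumptions, not the formulas from the thesis.

/*
 * Illustrative sketch only (build: cc -std=c99 entropy_sketch.c -lm).
 * The abstract does not spell out the thesis's formulas, so the reuse
 * estimate, decay, and counters below are assumptions meant to convey
 * the general idea of an information-theoretic replacement heuristic,
 * not Kobayashi's actual Entropy algorithm.
 */
#include <math.h>
#include <stdio.h>

#define WAYS 8

typedef struct {
    unsigned hits;   /* references to this line since it was filled          */
    unsigned age;    /* accesses to the set since this line's last reference */
} line_t;

/*
 * Estimate each line's reuse probability from its share of the set's hits,
 * decay it with age, and evict the line with the highest Shannon
 * self-information -log2(p), i.e. the line least likely to be referenced again.
 */
static int pick_victim(const line_t set[WAYS], double decay)
{
    unsigned total = 0;
    for (int w = 0; w < WAYS; w++)
        total += set[w].hits;

    int victim = 0;
    double worst = -1.0;
    for (int w = 0; w < WAYS; w++) {
        double p = total ? (double)set[w].hits / total : 0.0;
        p = p * pow(decay, set[w].age) + 1e-9;   /* older lines lose weight */
        double info = -log2(p);                  /* high info = unlikely reuse */
        if (info > worst) {
            worst = info;
            victim = w;
        }
    }
    return victim;
}

int main(void)
{
    line_t set[WAYS] = {
        { 9, 1 }, { 1, 7 }, { 4, 2 }, { 0, 9 },
        { 2, 3 }, { 6, 0 }, { 1, 5 }, { 3, 4 },
    };
    printf("evict way %d\n", pick_victim(set, 0.9 /* assumed decay constant */));
    return 0;
}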
2

A compiler-based leakage reduction technique by power-gating functional units in embedded microprocessors

Roy, Soumyaroop, 01 June 2006
Power-gating is a technique widely investigated for reducing leakage energy in the functional units of microprocessors at the architectural level. Effective power-gating involves deactivating idle functional units for sustained periods while incurring little or no performance degradation. Accurate prediction of long idle periods is essential and depends, in turn, on the characteristics of the application program. In this thesis, we propose a compiler-based leakage reduction technique for embedded architectures that exploits well-known attributes of embedded applications, namely small code size and intensive loops. From the control flow graph (CFG) representation of the source program, we construct a forest of loop hierarchy trees (LHTs), which capture the loop-nesting structure of the program. Because an LHT satisfies the partial ordering on loop nesting, we exploit this property to identify maximal subgraphs of functional-unit idleness in the original program. For each such subgraph, a sleep instruction is introduced at the entry point of the corresponding code segment, thus optimizing the number of sleep instructions. The sleep instruction has one operand, a bit vector of ON/OFF control bits for all functional units in the data path. Our target architecture is a modified ARM processor model comprising functional units with power-gating ability. We obtained an average leakage energy reduction of 34.1% for 12 benchmarks chosen from the MiBench suite, with a range of 19.5% and a standard deviation of 6.5%.
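As a rough illustration of the sleep-instruction idea (not the thesis's actual compiler pass or ISA encoding), the sketch below emits one hypothetical sleep pseudo-instruction per idle region, with a bit vector selecting which functional units to gate off. The unit list, bit assignment, and mnemonic are assumptions.

/*
 * Illustrative sketch, not the thesis's compiler pass: it shows how a
 * per-region "sleep" pseudo-instruction with a functional-unit bit vector
 * might be emitted at each region entry. Unit names, bit positions, and
 * the mnemonic are invented for this example.
 */
#include <stdio.h>

enum { FU_IALU = 1 << 0, FU_IMUL = 1 << 1, FU_FPADD = 1 << 2,
       FU_FPMUL = 1 << 3, FU_FPDIV = 1 << 4 };

typedef struct {
    const char *label;   /* entry point of a maximal idle region (e.g. loop preheader) */
    unsigned    idle;    /* bit vector of functional units idle throughout the region  */
} region_t;

/* Emit one sleep pseudo-instruction per region: set bits = power-gate (OFF). */
static void emit_sleep(const region_t *r)
{
    printf("%s:\n", r->label);
    printf("    sleep   #0x%02x      ; gate off idle units\n", r->idle);
}

int main(void)
{
    /* Hypothetical result of the LHT analysis: an integer-only loop idles all FP units. */
    region_t loops[] = {
        { "int_loop_entry", FU_FPADD | FU_FPMUL | FU_FPDIV },
        { "fp_loop_entry",  FU_IMUL },
    };
    for (unsigned i = 0; i < sizeof loops / sizeof loops[0]; i++)
        emit_sleep(&loops[i]);
    return 0;
}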
3

A Comparison of Three Computer System Simulators

Urdén, Ulf, January 2004
This thesis is a comparative study of three computer system simulators. Such programs are commonly used to test the efficiency and feasibility of new computer architectures, as well as for debugging and testing software. In this study, we evaluate the fundamental differences between three simulators: SimICS, SimpleScalar and ML-RSIM. A comprehensive survey of simulation techniques is presented, and each evaluated simulator is classified according to those criteria. The performance differences are quantified using a benchmark suite. The results show that the most feature-rich of the simulators also appears to have the highest performance in the group.
4

Evaluation of a method for identifying timing models

Kahsu, Lidia, January 2012
In today's world, traditional systems have been replaced by embedded systems with very large, highly configurable software, consisting of hundreds of tasks with large amounts of code, most of them subject to real-time constraints. In real-time systems, the worst-case execution time (WCET) of a program, that is, the longest execution time of a specified task, is a crucial quantity. WCET is determined by WCET analysis techniques, and the values produced should be tight and safe to ensure the proper timing behavior of a real-time system. Static WCET analysis is one technique for computing upper bounds on the execution time of programs without actually executing them, relying instead on mathematical models of the software and the hardware involved. Such models can be used to generate timing estimates at the source-code level when the hardware is not yet fully accessible or the code is not yet ready to compile. In this thesis, the methods used to build timing models developed by the WCET group at MDH are assessed by evaluating the accuracy of the resulting timing models for a number of hardware-architecture configurations. Furthermore, the timing model identification is extended to further hardware platforms, such as advanced architectures with caches and pipelines, and to floating-point instructions, by selecting benchmarks that use floating-point operations.
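To make the idea of timing-model identification concrete, here is a toy sketch (not the MDH group's method) that fits per-instruction-class cycle costs to measured run times by ordinary least squares. The two-class model and all numbers are invented for illustration.

/*
 * Toy sketch of timing-model identification by regression: estimate per-class
 * cycle costs c1, c2 from measured run times of several runs. The instruction
 * classes, counts, and measurements below are made up for this example.
 */
#include <stdio.h>

#define RUNS 4

int main(void)
{
    double x1[RUNS] = { 100, 250,  80, 400 };   /* ALU-op counts per run    */
    double x2[RUNS] = {  40, 120,  10, 260 };   /* memory-op counts per run */
    double y [RUNS] = { 330, 850, 185, 1590 };  /* measured cycles per run  */

    /* Normal equations for the model y ~ c1*x1 + c2*x2 (no intercept). */
    double a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
    for (int i = 0; i < RUNS; i++) {
        a11 += x1[i] * x1[i];
        a12 += x1[i] * x2[i];
        a22 += x2[i] * x2[i];
        b1  += x1[i] * y[i];
        b2  += x2[i] * y[i];
    }
    double det = a11 * a22 - a12 * a12;          /* nonzero for these inputs */
    double c1 = (b1 * a22 - b2 * a12) / det;     /* cycles per ALU op        */
    double c2 = (a11 * b2 - a12 * b1) / det;     /* cycles per memory op     */

    printf("estimated cost: %.2f cycles/ALU op, %.2f cycles/memory op\n", c1, c2);
    return 0;
}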
