91

Optimization of instruction memory for embedded systems

Janapsatya, Andhi, Computer Science & Engineering, Faculty of Engineering, UNSW January 2005 (has links)
This thesis presents methodologies for improving system performance and energy consumption by optimizing the memory hierarchy. The processor-memory performance gap is a well-known problem that is predicted to keep widening. The author describes a method to estimate the best L1 cache configuration for a given application, together with three methods to improve performance and reduce energy in embedded systems by optimizing the instruction memory. Performance estimation is an important procedure for assessing the performance of a system and the effectiveness of any applied optimizations. A cache memory performance estimation methodology is presented that quickly and accurately estimates the performance of multiple cache configurations; experimental results show it is on average 45 times faster than a widely used tool (Dinero IV). The first optimization method, code placement, is a software-only technique for improving instruction cache performance. It involves careful placement of code within memory to ensure a high hit rate when code is brought into the cache. Experimental results show that applying code placement reduces the cache miss rate by up to 71% and energy consumption by up to 63% compared to the application without code placement. The second method is a novel architecture for utilizing scratchpad memory as a replacement for the instruction cache. A hardware modification allows data to be written into the scratchpad during program execution, enabling dynamic control of the scratchpad contents. Because scratchpad memory has a faster access time and a lower energy consumption per access than cache memory, its use aims to improve performance and lower energy consumption relative to a cache-based system. Experimental results show an average energy reduction of 26.59% and an average performance improvement of 25.63% compared to a system with cache memory. The third method is an application profiling technique that uses statistical information to identify the application's hot-spots. Profiling is important for identifying sections of the application where performance degradation might occur and/or where the maximum performance gain can be obtained through optimization. The method was applied and tested on the scratchpad-based system described in this thesis; experimental results show its effectiveness in reducing energy and improving performance compared to a previous method for utilizing the scratchpad-based system (an average performance improvement of 23.6% and an average energy reduction of 27.1% are observed).
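The abstract above describes trace-driven estimation of instruction-cache performance across configurations. As a rough illustration of the underlying idea only (not the thesis methodology or its tool), the minimal sketch below simulates a single direct-mapped instruction cache over a toy address trace and reports the hit rate; the line size, number of sets, and the trace itself are assumed values chosen for the example. Evaluating multiple configurations would amount to re-running this loop with different LINE_SIZE and NUM_SETS parameters over the same trace.

/*
 * Minimal sketch (illustrative assumptions, not the thesis tool):
 * hit-rate estimation for one direct-mapped instruction cache
 * configuration over a recorded instruction-address trace.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define LINE_SIZE 32u   /* bytes per cache line (assumed) */
#define NUM_SETS  128u  /* number of sets (assumed)       */

int main(void)
{
    /* Toy instruction-address trace; a real trace would come from
       profiling the target application. */
    uint32_t trace[] = { 0x1000, 0x1004, 0x1020, 0x1000, 0x2000, 0x1004 };
    size_t n = sizeof trace / sizeof trace[0];

    uint32_t tags[NUM_SETS];
    int valid[NUM_SETS];
    memset(valid, 0, sizeof valid);

    size_t hits = 0;
    for (size_t i = 0; i < n; i++) {
        uint32_t block = trace[i] / LINE_SIZE;  /* block address      */
        uint32_t set   = block % NUM_SETS;      /* direct-mapped index */
        uint32_t tag   = block / NUM_SETS;
        if (valid[set] && tags[set] == tag) {
            hits++;             /* line already resident: cache hit */
        } else {
            valid[set] = 1;     /* miss: fill the line              */
            tags[set]  = tag;
        }
    }
    printf("hit rate: %.2f%%\n", 100.0 * hits / n);
    return 0;
}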
93

Algorithms for distributed caching and aggregation

Tiwari, Mitul. January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
94

Power reduction techniques for memory elements

Katrue, Srikanth. January 2007 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2007. / Typescript. Includes bibliographical references (leaves 58-60).
95

Power reduction of MPEG video decoding for mobile multimedia systems

Lewis, James M. January 1900 (has links)
Thesis (M.S.)--Oregon State University, 2008. / Printout. Includes bibliographical references (leaves 33-34). Also available on the World Wide Web.
96

On the use and performance of communication primitives in software controlled cache-coherent cluster architectures

Qin, Xiaohan, January 1997 (has links)
Thesis (Ph. D.)--University of Washington, 1997. / Vita. Includes bibliographical references (leaves [117]-125).
97

Cache design and timing analysis for preemptive multi-tasking real-time uniprocessor systems

Tan, Yudong. January 2005 (has links) (PDF)
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2005. / Schimmel, David, Committee Member ; Meliopoulos, A. P. Sakis, Committee Member ; Mooney, Vincent, Committee Chair ; Prvulovic, Milos, Committee Member ; Yalamanchili, Sudhakar, Committee Member. Includes bibliographical references.
98

Design and implementation of configuration modules in a programmable hardware-assisted cache emulator (PHA$E)

Chalainanont, Nirut. January 1900 (has links)
Thesis (M.S.)--Oregon State University, 2005. / Printout. Includes bibliographical references (leaves 38-40). Also available on the World Wide Web.
99

A study for reducing conflict misses in data cache

Ammari, Rami J. January 2004 (has links)
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
100

Cache architectures to improve IP lookups

Ravinder, Sunil. January 2009 (has links)
Thesis (M.Sc.)--University of Alberta, 2009. / Title from PDF file main screen (viewed on Mar. 18, 2010). A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Computing Science, University of Alberta. Includes bibliographical references.
