81

Semantic caching for XML queries

Chen, Li. January 2004
Thesis (Ph.D.)--Worcester Polytechnic Institute. Keywords: replacement strategy; query rewriting; query containment; semantic caching; query; XML. Includes bibliographical references (p. 210-222).
82

Deterministic object management in large distributed systems

Mikhailov, Mikhail. January 2003
Thesis (Ph.D.)--Worcester Polytechnic Institute. Keywords: server invalidation; distributed object management; object relationships; web caching; change characteristics; object composition; cache consistency. Includes bibliographical references (p. 156-163).
83

Application programmer directed data prefetching

Silva, Malik. January 2001
Thesis (M.Sc.)--York University, 2001. Graduate Programme in Computer Science. Typescript. Includes bibliographical references (leaves 102-104). Also available on the Internet via web browser at http://wwwlib.umi.com/cr/yorku/fullcit?pMQ66406.
84

Sharing and caching characteristics of Internet content

Wolman, Alastair. January 2002
Thesis (Ph.D.)--University of Washington, 2002. Vita. Includes bibliographical references (p. 137-147).
85

Cache design for low power and yield enhancement

Mohammad, Baker Shehadah. 13 September 2012
One of the major limiters of computer systems and system-on-chip (SoC) designs is access to main memory, which is typically two orders of magnitude slower than the processor. To bridge this gap, modern processors already devote more than half of their on-chip transistors to the last-level cache, yet caches have a negative impact on area, power, and yield. The goal of this research is to design caches that operate at lower voltages while enhancing yield. Our strategy is to improve the static noise margin (SNM) and the writability of the conventional six-transistor SRAM cell by reducing the effect of parametric variations on the cell. This is done using a novel circuit that reduces the voltage swing on the word line during read operations and reduces the memory supply voltage during write operations. The proposed circuit increases the SRAM's SNM and write margin using a single voltage supply, with minimal impact on chip area, complexity, and timing. A test chip with an 8-kilobyte SRAM block manufactured in 45-nm technology is used to verify the practicality of the contribution and demonstrate the effectiveness of the new circuit's implementation.

Cache organization is one of the most important factors affecting cache design complexity, performance, area, and power. The main architectural choice for caches is whether to implement the tag array using a standard SRAM or a content-addressable memory (CAM). This choice has far-reaching consequences for several aspects of the cache design, in particular power consumption. Our contribution in this area is an in-depth study of the complex tradeoffs of area, timing, power, and design complexity between an SRAM-based tag and a CAM-based one. Our results indicate that an SRAM-based tag design often provides a better overall design point and is superior with respect to energy, especially for interleaved multi-threading processors.

Being able to test and screen chips is a key factor in achieving high yield. Most industry-standard CAD tools used to analyze fault coverage and generate test vectors require gate-level models. However, since caches are typically designed using a transistor-level flow, an abstraction step is needed to generate gate-level models that are equivalent to the actual transistor-level design. The third contribution of this research is a framework to verify that the gate-level representation of a custom design is equivalent to the transistor-level design.
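The SRAM-versus-CAM tag tradeoff studied in this thesis can be made concrete with a small sketch. The C fragment below is an illustration only, not code from the thesis; the geometry (4-way set-associative SRAM tags, a 32-entry CAM tag array, 32-byte lines) is assumed for the example. It shows why the SRAM organization is cheaper per access: the set index selects one row, so only a handful of tags are read and compared, whereas a CAM effectively compares every stored tag in parallel on each access.

    /* Illustrative sketch only -- parameters are assumptions, not from the thesis. */
    #include <stdint.h>
    #include <stdbool.h>

    #define WAYS      4    /* SRAM tag: 4-way set-associative */
    #define SETS      64
    #define ENTRIES   32   /* CAM tag: fully associative, 32 entries */
    #define LINE_BITS 5    /* 32-byte lines */
    #define SET_BITS  6    /* log2(SETS) */

    static uint32_t sram_tags[SETS][WAYS];
    static uint32_t cam_tags[ENTRIES];

    /* SRAM-based tag lookup: the set index picks one row, so only
     * WAYS tags are read and compared per access. */
    bool sram_tag_hit(uint32_t addr, int *way_out) {
        uint32_t set = (addr >> LINE_BITS) & (SETS - 1);
        uint32_t tag = addr >> (LINE_BITS + SET_BITS);
        for (int w = 0; w < WAYS; w++) {
            if (sram_tags[set][w] == tag) { *way_out = w; return true; }
        }
        return false;
    }

    /* CAM-based tag lookup: in hardware every entry is compared in
     * parallel; this loop stands in for ENTRIES simultaneous match
     * lines, which is where the extra energy per access goes. */
    bool cam_tag_hit(uint32_t addr, int *entry_out) {
        uint32_t tag = addr >> LINE_BITS;
        for (int e = 0; e < ENTRIES; e++) {
            if (cam_tags[e] == tag) { *entry_out = e; return true; }
        }
        return false;
    }

(Valid bits are omitted for brevity; a real tag array would also track line validity and, in the SRAM case, a per-set replacement policy.)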
86

A pervasive information framework based on semantic routing and cooperative caching

Chen, Weisong, 陳偉松. January 2004
Published or final version; abstract and table of contents available. Computer Science and Information Systems. Master of Philosophy.
87

On channel adaptive wireless cache invalidation and game theoretic power aware wireless data access

Yeung, Kai-ho Mark, 楊啟豪. January 2004
Published or final version; abstract and table of contents available. Electrical and Electronic Engineering. Master of Philosophy.
88

Extending caching for two applications: disseminating live data and accessing data from disks

Vellanki, Vivekanand. 12 1900
No description available.
89

Optimization of instruction memory for embedded systems

Janapsatya, Andhi (Computer Science & Engineering, Faculty of Engineering, UNSW). January 2005
This thesis presents methodologies for improving system performance and energy consumption by optimizing the performance of the memory hierarchy. The processor-memory performance gap is a well-known problem that is predicted to get worse as the gap between processor and memory speeds widens. The author describes a method to estimate the best L1 cache configuration for a given application. In addition, three methods are presented to improve performance and reduce energy in embedded systems by optimizing the instruction memory.

Performance estimation is an important procedure for assessing the performance of a system and the effectiveness of any applied optimizations. A cache memory performance estimation methodology is presented in this thesis, designed to quickly and accurately estimate the performance of multiple cache memory configurations. Experimental results showed that the methodology is on average 45 times faster than a widely used tool (Dinero IV).

The first optimization method, a software-only technique called code placement, improves the performance of the instruction cache memory. It involves careful placement of code within memory to ensure a high cache hit rate when code is brought into the cache. Experimental results show that applying code placement reduces the cache miss rate by up to 71% and energy consumption by up to 63% compared to the application without code placement.

The second method involves a novel architecture for utilizing scratchpad memory, designed as a replacement for the instruction cache. A hardware modification allows data to be written into the scratchpad memory during program execution, enabling dynamic control of the scratchpad's contents. Scratchpad memory has a faster access time and lower energy consumption per access than cache memory, so its use aims to improve performance and lower energy consumption relative to a system with a cache. Experimental results show an average energy reduction of 26.59% and an average performance improvement of 25.63% compared to a system with cache memory.

The third method is an application profiling technique that uses statistical information to identify an application's hot spots. Profiling is important for identifying sections of the application where performance degradation might occur and where the maximum performance gain can be obtained through optimization. The method was applied and tested on the scratchpad-based system described in this thesis. Experimental results show the effectiveness of the analysis method in reducing energy and improving performance compared to the previous method for utilizing the scratchpad memory based system (average performance improvement of 23.6% and average energy reduction of 27.1%).
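The thesis's fast estimation method itself is not reproduced here, but the baseline it is compared against (trace-driven simulation in the style of Dinero IV) is easy to sketch. The C program below is an assumed, minimal model of a single direct-mapped instruction-cache configuration driven by a synthetic address trace; the configuration (8 KB, 32-byte lines) and the trace are invented for illustration, not taken from the thesis.

    /* Illustrative sketch only -- a toy Dinero-style trace-driven model,
     * not the thesis's estimation methodology. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define SETS      256   /* assumed: 256 sets x 32-byte lines = 8 KB */
    #define LINE_BITS 5

    static uint32_t tags[SETS];
    static bool     valid[SETS];

    /* One instruction fetch: returns true on hit, fills the line on miss. */
    static bool access_icache(uint32_t pc) {
        uint32_t set = (pc >> LINE_BITS) % SETS;
        uint32_t tag = pc >> LINE_BITS;
        if (valid[set] && tags[set] == tag) return true;
        valid[set] = true;
        tags[set]  = tag;
        return false;
    }

    int main(void) {
        /* Synthetic trace: 100 passes over a 16 KB code footprint, which
         * thrashes an 8 KB direct-mapped cache and yields a high miss rate. */
        uint64_t hits = 0, accesses = 0;
        for (int iter = 0; iter < 100; iter++) {
            for (uint32_t pc = 0x1000; pc < 0x1000 + 16384; pc += 4) {
                accesses++;
                if (access_icache(pc)) hits++;
            }
        }
        printf("miss rate = %.4f\n", 1.0 - (double)hits / accesses);
        return 0;
    }

A real estimator would read addresses from a program trace and sweep cache size, line size, and associativity, re-running a loop like this once per candidate configuration; that repeated cost is exactly what a faster estimation methodology attacks.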