251

Second-tier Cache Management to Support DBMS Workloads

Li, Xuhui 16 September 2011
Enterprise Database Management Systems (DBMS) often run on computers with dedicated storage systems. Their data access requests need to go through two tiers of cache, i.e., a database bufferpool and a storage server cache, before reaching the storage media, e.g., disk platters. A tremendous amount of work has been done to improve the performance of the first-tier cache, i.e., the database bufferpool. However, the amount of work focusing on second-tier cache management to support DBMS workloads is comparably small. In this thesis we propose several novel techniques for managing second-tier caches to boost DBMS performance in terms of query throughput and query response time.

The main purpose of second-tier cache management is to reduce the I/O latency endured by database query executions. This goal can be achieved by minimizing the number of reads and writes issued from second-tier caches to storage devices. The first part of our research focuses on reducing the number of read I/Os issued by second-tier caches. We observe that DBMSs issue I/O requests for various reasons. The rationales behind these I/O requests provide useful information to second-tier caches because they can be used to estimate the temporal locality of the data blocks being requested. A second-tier cache can exploit this information when making replacement decisions. In this thesis we propose a technique to pass this information from DBMSs to second-tier caches and to use it in guiding cache replacements.

The second part of this thesis focuses on reducing the number of writes issued by second-tier caches. Our work is twofold. First, we observe that although there are second-tier caches within computer systems, today's DBMSs cannot take full advantage of them. For example, most commercial DBMSs use forced writes to propagate bufferpool updates to permanent storage for data durability reasons. We notice that enforcing such a practice is more conservative than necessary. Some of the writes can be issued as unforced requests and can be cached in the second-tier cache without immediate synchronization. This gives the second-tier cache opportunities to cache and consolidate multiple writes into one request. Unfortunately, the POSIX-compliant file system interfaces provided by mainstream operating systems (e.g., Unix and Windows) are not flexible enough to support such dynamic synchronization. We propose to extend these interfaces to let DBMSs take advantage of unforced writes whenever possible.

Additionally, we observe that existing cache replacement algorithms are designed solely to maximize read cache hits (i.e., to minimize read I/Os). The purpose is to minimize read latency, which is on the critical path of query executions. We argue that minimizing read requests is not the only objective of cache replacement. When I/O bandwidth becomes a bottleneck, the objective should be to minimize the total number of I/Os, including both reads and writes, to achieve the best performance. We propose to associate a new type of replacement cost, the total number of I/Os caused by the replacement, with each cache page; and we also present a partial characterization of an optimal algorithm that minimizes the total number of I/Os generated by caches. Based on this knowledge, we extend several existing replacement algorithms that are write-oblivious (focusing only on reducing reads) to be write-aware, and we observe promising performance gains in our evaluations.
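The write-aware replacement idea above can be made concrete with a small sketch. The following Python fragment scores each cached page by the total number of I/Os its eviction is expected to cause: one write-back if the page is dirty, plus a likely re-read if the page is still hot. All names and the locality estimate are illustrative assumptions, not the thesis's actual algorithm.

    import time

    class Page:
        def __init__(self, block_id):
            self.block_id = block_id
            self.dirty = False                  # modified since last write-back?
            self.last_access = time.monotonic()

    class WriteAwareCache:
        def __init__(self):
            self.pages = {}                     # block_id -> Page

        def _reref_probability(self, page, now):
            # Crude temporal-locality estimate: recently touched pages
            # are assumed more likely to be referenced again.
            return 1.0 / (1.0 + (now - page.last_access))

        def _eviction_cost(self, page, now):
            # Expected I/Os caused by evicting this page: one write-back
            # if dirty, plus one read if it is referenced again later.
            return (1.0 if page.dirty else 0.0) + self._reref_probability(page, now)

        def choose_victim(self):
            # Evict the page whose eviction causes the fewest total I/Os,
            # counting writes as well as reads (write-aware replacement).
            now = time.monotonic()
            return min(self.pages.values(),
                       key=lambda p: self._eviction_cost(p, now))

A write-oblivious policy would score pages by re-reference likelihood alone; adding the write-back term is what shifts the objective from minimizing read misses to minimizing total I/Os.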
252

A cache framework for nomadic clients of web services

Elbashir, Kamaleldin 15 September 2009
This research explores the problems associated with caching SOAP Web Service request/response pairs and presents a domain-independent framework enabling transparent caching of Web Service requests for mobile clients. The framework intercepts method calls intended for the web service and proceeds by buffering and caching the outgoing method calls and the inbound responses. This enables a mobile application to use Web Services seamlessly by masking fluctuations in network conditions. The framework addresses two main issues: first, how to enrich the WS standards to enable caching, and second, how to maintain consistency for state-dependent Web Service request/response pairs.
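A minimal sketch of the interception idea, assuming a client-side proxy that keys cached responses by the serialized outgoing request; the names, the TTL policy, and the stale-on-disconnect fallback are assumptions for illustration, not the framework's actual design.

    import json
    import time

    class CachingProxy:
        def __init__(self, service, ttl_seconds=300):
            self.service = service      # object exposing the real service calls
            self.ttl = ttl_seconds
            self.cache = {}             # request key -> (timestamp, response)

        def call(self, operation, **params):
            # Key the cache by the full outgoing request.
            key = (operation, json.dumps(params, sort_keys=True))
            hit = self.cache.get(key)
            now = time.time()
            if hit is not None and now - hit[0] < self.ttl:
                return hit[1]           # fresh cached response: no network round trip
            try:
                response = getattr(self.service, operation)(**params)
                self.cache[key] = (now, response)
                return response
            except ConnectionError:
                if hit is not None:
                    return hit[1]       # mask the outage with a stale response
                raise

Because the proxy sits between the application and the service stub, application code is unchanged, which is what makes the caching transparent to the mobile client.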
253

SoftCache Architecture

Fryman, Joshua Bruce 19 July 2005
Multiple trends in computer architecture are beginning to collide as process technology reaches ever smaller feature sizes. Problems with managing power, access times across a die, and increasing complexity to sustain growth are now blocking commercial products like the Pentium 4. These problems also occur in the embedded system space, albeit in a slightly different form. However, as process technology marches on, today's high-performance space is becoming tomorrow's embedded space. New techniques are needed to overcome these problems. In this thesis, we propose a novel architecture called SoftCache to address these emerging issues for embedded systems. We reduce the on-die memory controller infrastructure, which reduces both power and space requirements, using the ubiquitous network device arena as a proving ground of viability. In addition, the SoftCache achieves further power and area savings by converting on-die cache structures into directly addressable SRAM and reducing or eliminating the external DRAM. To avoid the burden of programming complexity this approach presents to the application developer, we provide a transparent client-server dynamic binary translation system that runs arbitrary ELF executables on a stripped-down embedded target. One drawback to such a scheme lies in the overhead of additional instructions required to effect cache behavior, particularly with respect to data caching. Another drawback is the power used when fetching from remote memory over the network. The SoftCache comprises a dynamic client-server translation system on simplified hardware, targeted at Intel XScale client devices controlled from servers over the network. Reliance upon a network server as a "backing store" introduces new levels of complexity, yet also allows for more efficient use of local space. The explicitly software-managed aspects create a cache of variable line size, full associativity, and high flexibility. This thesis explores these particular issues, while approaching everything from the perspective of feasibility and actual architectural changes.
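A rough sketch of the software-managed cache structure described above, with full associativity and variable line sizes, backed by a remote server instead of local DRAM; the fetch callback and all names are hypothetical illustrations.

    class SoftCacheLine:
        def __init__(self, base_addr, data):
            self.base = base_addr
            self.size = len(data)               # lines may have any size
            self.data = data

    class SoftCache:
        def __init__(self, fetch_remote):
            self.lines = []                     # fully associative: search every line
            self.fetch_remote = fetch_remote    # callback to the network server

        def read(self, addr):
            for line in self.lines:
                if line.base <= addr < line.base + line.size:
                    return line.data[addr - line.base]   # hit in local SRAM
            # Miss: the translation layer fetches a region from the server
            # (the "backing store") and installs it as a new line.
            base, data = self.fetch_remote(addr)
            self.lines.append(SoftCacheLine(base, data))
            return data[addr - base]

The linear search stands in for work done in software rather than tag hardware; that explicit software management is exactly what permits variable line sizes and full associativity.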
254

Design and verification of an ARM10-like Processor and its System Integration

Lin, Chun-Shou 07 February 2012
As process technology advances, more IP blocks can be integrated on a chip of the same area, and embedded systems have become correspondingly more powerful in their applications. A more efficient processor core is therefore needed to support the whole embedded system in complex environments. The main purpose of this thesis is to improve the computation speed, memory management, and debugging support of SYS32TME III, an ARM10-like processor designed by our lab. We integrate a cache/MMU and an EICE (embedded in-circuit emulator) into the processor core. The cache/MMU not only reduces the time the processor spends accessing external memory but also provides virtual addressing for the operating system. To ensure system correctness and shorten integration time, we verify the cache/MMU with five functional tests (cache off; cache on with MMU off, covering cache hit and miss; and cache on with MMU on, covering cache hit, cache miss with TLB hit, and cache miss with TLB miss) and verify the EICE with six coprocessor instructions (LDC, MCR, MCRR, MRC, MRRC, STC). We then run regression tests on the integrated microprocessor, cache/MMU, and EICE. Finally, we tuned the performance of the integrated cache/MMU and EICE so that the ARM10-like processor can run at 200 MHz in a 0.18 μm process.
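For illustration only, the five cache/MMU configurations listed above can be written as a test matrix that a simulation testbench might iterate over; the field names are assumptions, not the thesis's actual test harness.

    CACHE_MMU_TESTS = [
        {"cache": False, "mmu": False, "case": "cache off"},
        {"cache": True,  "mmu": False, "case": "cache hit and cache miss"},
        {"cache": True,  "mmu": True,  "case": "cache hit"},
        {"cache": True,  "mmu": True,  "case": "cache miss, TLB hit"},
        {"cache": True,  "mmu": True,  "case": "cache miss, TLB miss"},
    ]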
255

Implementation & Performance Analysis of MP3 Player over ARM9 Platform

Huang, Zih-Cheng 06 July 2005
This thesis describes in detail the process of porting an MP3 player to run on an ARM9 platform, using the AACI, MMU, I-Cache, D-Cache, DMA, I-TCM, and D-TCM devices of the Versatile VPB926EJS development board, while reducing the clock rate of the ARM926EJ CPU. At the lowest clock rate, the MP3 player software can still run smoothly with minimal hardware resources.
256

A Prefetching Method for Interactive Web GIS Applications

Yesilmurat, Serdar 01 March 2010
A Web GIS system faces the major issue of serving map data to client applications. Since most GIS services provide their geospatial data in basic image formats like PNG and JPEG, constructing those images and transferring them over the internet are costly operations. Various approaches have been offered to improve this inefficient process. Caching the responses to requests on the client side is the most commonly implemented solution. However, this method is not adequate by itself. Besides caching responses, predicting the client's next possible requests and updating the cache with the responses to those requests provides a remarkable performance improvement. This procedure is called "prefetching". Via prefetching, caching mechanisms can be used more effectively and efficiently. This study proposes a prefetching algorithm called Retrospective Adaptive Prefetch (RAP). The algorithm is built on a heuristic method that takes the user's former actions into consideration. This method reduces the user-perceived response time and improves users' navigation efficiency. The caching mechanism developed takes the memory capacity of the client machine into consideration to adjust the cache capacity by default; otherwise, the cache size can be configured manually. RAP is compared with four other methods. The experiments show that RAP provides better performance than the other methods compared.
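A minimal sketch of the prefetching idea: after each tile request, extrapolate the user's recent pan direction and warm the cache with the next likely tile. This illustrates retrospective prediction in general, not the exact RAP algorithm; all names are assumptions.

    class TilePrefetcher:
        def __init__(self, fetch_tile, history_len=5):
            self.fetch_tile = fetch_tile    # callback that downloads one map tile
            self.history = []               # recent (x, y) tile coordinates
            self.history_len = history_len
            self.cache = {}

        def request(self, x, y):
            if (x, y) not in self.cache:
                self.cache[(x, y)] = self.fetch_tile(x, y)
            self.history = (self.history + [(x, y)])[-self.history_len:]
            self._prefetch()
            return self.cache[(x, y)]

        def _prefetch(self):
            if len(self.history) < 2:
                return
            # Extrapolate the pan direction from the last two requests.
            (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
            dx, dy = x1 - x0, y1 - y0
            nxt = (x1 + dx, y1 + dy)
            if (dx or dy) and nxt not in self.cache:
                self.cache[nxt] = self.fetch_tile(*nxt)   # warm the cache

When the user keeps panning in one direction, the next request is already cached, which is the source of the reduced user-perceived response time.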
257

An energy efficient TCAM enhanced cache architecture

Surprise, Jason Mathew 29 August 2005
Microprocessors are used in a variety of systems ranging from high-performance supercomputers running scientific applications to battery-powered cell phones performing real-time tasks. Due to the large disparity between processor clock speed and main memory access time, most modern processors include several caches, which consume more than half of the total chip area and power budget. As the performance gap between processors and memory has increased, the trend has been to increase the size of the on-chip caches. However, increasing the cache size also increases its access time and energy consumption. This growing power dissipation problem is making traditional cooling and packaging techniques less effective, requiring cache designers to focus on architectural-level energy efficiency rather than performance alone. The goal of this thesis is to propose a new cache architecture and to evaluate its efficiency in terms of miss rate, system performance, energy consumption, and area overhead. The proposed architecture employs a few Ternary-CAM (TCAM) cells in the tag array to enable dynamic compression of tag entries containing contiguous values. By dynamically compressing tag entries, the number of entries in the tag array can be reduced by a factor of 2^N, where N is the number of tag bits that can be compressed. The architecture described in this thesis is applicable to any cache structure that uses Content Addressable Memory (CAM) cells to store tag bits. To evaluate the effectiveness of the TCAM Enhanced Cache Architecture for a wide scope of applications, two case studies were performed: the L2 Data-TLB (DTLB) of a high-performance processor, and the L1 instruction and data caches of a low-power embedded processor. Results indicate that an L2 DTLB implementing 3-bit tag compression can achieve 93% of the performance of a conventional L2 DTLB of the same size while reducing on-chip energy consumption by 74% and total area by 50%. Similarly, an embedded processor cache implementing 2-bit tag compression achieves 99% of the performance of a conventional cache while reducing on-chip energy consumption by 33% and total area by 10%.
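The compression mechanism can be illustrated with a short sketch: a TCAM entry stores a tag together with a mask of don't-care bits, so one entry matches all 2^N tags that differ only in the N masked bits. The class and the example values below are illustrative assumptions.

    class TcamTagEntry:
        def __init__(self, tag, dont_care_bits):
            # The low-order dont_care_bits of the tag are wildcards.
            self.mask = ~((1 << dont_care_bits) - 1)
            self.tag = tag & self.mask

        def matches(self, lookup_tag):
            # Masked bits are ignored in the comparison, just as the
            # ternary "don't care" state ignores them in TCAM hardware.
            return (lookup_tag & self.mask) == self.tag

    # One entry with N = 2 compressed bits covers 4 contiguous tags:
    entry = TcamTagEntry(0b10110100, dont_care_bits=2)
    assert all(entry.matches(0b10110100 | i) for i in range(4))
    assert not entry.matches(0b10111000)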
258

Semantic caching for XML queries

Chen, Li. January 2004
Thesis (Ph. D.)--Worcester Polytechnic Institute. / Keywords: Replacement strategy; Query rewriting; Query containment; Semantic caching; Query; XML. Includes bibliographical references (p. 210-222).
259

Deterministic object management in large distributed systems

Mikhailov, Mikhail. January 2003
Thesis (Ph. D.)--Worcester Polytechnic Institute. / Keywords: server invalidation; distributed object management; object relationships; web caching; change characteristics; object composition; cache consistency. Includes bibliographical references (p. 156-163).
260

Application programmer directed data prefetching

Silva, Malik. January 2001
Thesis (M. Sc.)--York University, 2001. Graduate Programme in Computer Science. / Typescript. Includes bibliographical references (leaves 102-104). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://wwwlib.umi.com/cr/yorku/fullcit?pMQ66406.
