121

Second-tier Cache Management to Support DBMS Workloads

Li, Xuhui 16 September 2011 (has links)
Enterprise Database Management Systems (DBMSs) often run on computers with dedicated storage systems. Their data access requests need to go through two tiers of cache, i.e., a database bufferpool and a storage server cache, before reaching the storage media, e.g., disk platters. A tremendous amount of work has been done to improve the performance of the first-tier cache, i.e., the database bufferpool. However, comparatively little work has focused on second-tier cache management to support DBMS workloads. In this thesis we propose several novel techniques for managing second-tier caches to boost DBMS performance in terms of query throughput and query response time. The main purpose of second-tier cache management is to reduce the I/O latency endured by database query executions. This goal can be achieved by minimizing the number of reads and writes issued from second-tier caches to storage devices.

The first part of our research focuses on reducing the number of read I/Os issued by second-tier caches. We observe that DBMSs issue I/O requests for various reasons. The rationales behind these I/O requests provide useful information to second-tier caches because they can be used to estimate the temporal locality of the data blocks being requested. A second-tier cache can exploit this information when making replacement decisions. In this thesis we propose a technique to pass this information from DBMSs to second-tier caches and to use it in guiding cache replacements.

The second part of this thesis focuses on reducing the number of writes issued by second-tier caches. Our work is twofold. First, we observe that although second-tier caches exist within computer systems, today's DBMSs cannot take full advantage of them. For example, most commercial DBMSs use forced writes to propagate bufferpool updates to permanent storage for data durability reasons. We notice that enforcing such a practice is more conservative than necessary. Some of the writes can be issued as unforced requests and can be cached in the second-tier cache without immediate synchronization. This gives the second-tier cache opportunities to cache and consolidate multiple writes into one request. Unfortunately, the POSIX-compliant file system interfaces provided by mainstream operating systems (e.g., Unix and Windows) are not flexible enough to support such dynamic synchronization. We propose to extend these interfaces to let DBMSs take advantage of unforced writes whenever possible. Second, we observe that existing cache replacement algorithms are designed solely to maximize read cache hits (i.e., to minimize read I/Os). The purpose is to minimize read latency, which is on the critical path of query executions. We argue that minimizing read requests is not the only objective of cache replacement. When I/O bandwidth becomes a bottleneck, the objective should be to minimize the total number of I/Os, including both reads and writes, to achieve the best performance. We propose to associate a new type of replacement cost, i.e., the total number of I/Os caused by the replacement, with each cache page, and we present a partial characterization of an optimal algorithm that minimizes the total number of I/Os generated by caches. Based on this knowledge, we extend several existing replacement algorithms, which are write-oblivious (they focus only on reducing reads), to be write-aware, and we observe promising performance gains in our evaluations.
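As a rough illustration of the write-aware replacement cost described above (a sketch only, not the thesis's algorithm; the reuse_estimate hint and all numbers are hypothetical):

from dataclasses import dataclass

@dataclass
class CachePage:
    block_id: int
    dirty: bool            # evicting a dirty page costs an extra write I/O
    reuse_estimate: float  # hypothetical temporal-locality hint passed down from the DBMS

def replacement_cost(page):
    """Expected total I/Os caused by evicting this page: a write-back (if dirty)
    plus the read needed to fetch it again if it is re-referenced."""
    write_cost = 1.0 if page.dirty else 0.0
    refetch_cost = page.reuse_estimate
    return write_cost + refetch_cost

def choose_victim(pages):
    # A write-oblivious policy would ignore write_cost; a write-aware one includes it.
    return min(pages, key=replacement_cost)

cache = [
    CachePage(block_id=1, dirty=True,  reuse_estimate=0.2),
    CachePage(block_id=2, dirty=False, reuse_estimate=0.6),
    CachePage(block_id=3, dirty=False, reuse_estimate=0.1),
]
print("evict block", choose_victim(cache).block_id)  # block 3: clean and unlikely to be reused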
122

Analytical Layer Planning for Nanometer VLSI Designs

Chang, Chi-Yu August 2012 (has links)
In this thesis we propose an intermediate sub-process between the placement and routing stages of physical design. The algorithm generates layer guidance for post-placement optimization techniques, especially buffer insertion. This issue has become critical in today's VLSI chip design because of timing, congestion, and the increasingly non-uniform parasitics among different metal layers. As a step before routing, the layer planning algorithm also accounts for routability by minimizing the overlap area between different nets. Layer directive information, which is a crucial concern in industrial design, is also considered in the algorithm. The core problem is formulated as a nonlinear programming problem composed of an objective function and constraints, and is solved by the conjugate gradient method. The algorithm is implemented in C++ under the Linux operating system and tested on the ISPD 2008 Global Routing Contest benchmarks. The experimental results, presented at the end of this thesis, confirm the effectiveness of our approach, especially with respect to routability.
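As a rough sketch of this kind of formulation (a toy layer-assignment objective minimized with the conjugate gradient method via SciPy; the terms and weights are invented for illustration and are not taken from the thesis):

import numpy as np
from scipy.optimize import minimize

# x[i] is a continuous "layer position" for net i: higher layers are assumed to
# have lower parasitics but tighter capacity. All weights are hypothetical.
delay_weight = np.array([3.0, 1.0, 2.0])   # timing criticality per net (hypothetical)
congestion_weight = 0.5                    # penalty for nets crowding the same layer

def objective(x):
    timing = np.sum(delay_weight * np.exp(-x))                  # critical nets prefer higher layers
    diffs = x[:, None] - x[None, :]
    overlap = congestion_weight * np.sum(np.exp(-diffs ** 2))   # smooth same-layer penalty
    spread = 0.1 * np.sum((x - 4.0) ** 2)                       # keep positions in a bounded range
    return timing + overlap + spread

result = minimize(objective, x0=np.zeros(3), method="CG")       # conjugate gradient
print("continuous layer positions:", np.round(result.x, 2))
print("snapped to metal layers:   ", np.round(result.x).astype(int))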
123

Quantitative microbial risk assessment: a catchment management tool to delineate buffer distances for on-site sewage treatment and disposal systems in Sydney's drinking water catchments

Charles, Katrina, Civil & Environmental Engineering, Faculty of Engineering, UNSW January 2009 (has links)
On-site sewage systems, such as septic tank-absorption trenches, are used by approximately 20 000 people who live within the catchments that supply Sydney's drinking water. These systems discharge sewage, treated to varying degrees depending on the system type and level of maintenance, to the environment. This can result in contamination of drinking water supplies if systems are not designed or managed appropriately. The aim of the project was to develop a methodology to define appropriate buffer distances between on-site sewage systems and waterways in Sydney's drinking water catchments, to ensure the protection of drinking water quality. Specific objectives included: identifying the current status of on-site sewage management; assessing the effluent quality and treatment performance of septic tanks, aerated wastewater treatment systems (AWTS) with disinfection and an amended material sand mound; and developing an appropriate methodology for delineating buffer distances and assessing development applications. Viruses were used as the focus for delineating the buffer distances because of their mobility and robustness in the environment, and the potential health consequences of their presence in drinking water. A Quantitative Microbial Risk Assessment (QMRA) model was developed to calculate the cumulative impact of the on-site sewage systems in the Warragamba catchment, based on data from the literature and experiments, with consideration of virus loads from sewage treatment plants within the catchments. The model enabled consideration of what constitutes a tolerable impact in terms of the resulting infections within the community. From the QMRA, the tolerable loads of viruses from the Warragamba catchment were 10^8 viruses per year in raw water and 10^4 viruses per year in treated water. A log reduction method was developed to facilitate individual site development assessments. This method was compared with other management approaches to development assessment: fixed minimum buffer distances of 100 m, reducing failure rates to zero, and the use of a preferred system. Each of these methods had a limit on how much it could reduce virus loads to the catchment, due to either failure or short buffer distances at some sites. While the log reduction method is limited by the failure rates, it provides a quantitative measure of risk by which maintenance inspections can be prioritised.
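The log-reduction arithmetic behind such targets can be illustrated as follows; only the tolerable loads come from the abstract, while the catchment emission figure is a hypothetical placeholder:

import math

tolerable_raw = 1e8       # viruses/year in raw water (from the abstract)
tolerable_treated = 1e4   # viruses/year in treated water (from the abstract)
catchment_emission = 1e12 # hypothetical total viruses/year leaving all on-site systems

def required_log_reduction(emission, tolerable):
    """log10 reduction needed between the source and the receiving water."""
    return max(0.0, math.log10(emission / tolerable))

print(f"raw water target:     {required_log_reduction(catchment_emission, tolerable_raw):.1f} log10")
print(f"treated water target: {required_log_reduction(catchment_emission, tolerable_treated):.1f} log10")
# Buffer distance, soil treatment and system type each contribute part of these
# reductions; the thesis's log reduction method allocates them per development site.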
124

Preventing buffer overflow attacks using binary of split stack (BoSS) /

Doshi, Parag Nileshbhai, January 2007 (has links)
Thesis (M.S.)--University of Texas at Dallas, 2007. / Includes vita. Includes bibliographical references (leaves 42-43)
125

The RIT IEEE-488 Buffer design /

Connor, John. January 1992 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 1992. / Typescript. Includes bibliographical references.
126

Evaluation of barriers to black-tailed prairie dog (Cynomys ludovicianus) colony expansion, Bad River ranches, South Dakota /

Gray, Marcus B. January 2009 (has links) (PDF)
Thesis (M.S.)--Wildlife and Fisheries Sciences Dept., South Dakota State University, 2009. / Includes bibliographical references. Also available via the World Wide Web.
127

A gasp of fresh air: a high speed distributed FIFO scheme for managing interconnect parasitics /

Rydberg, Ray Robert, January 2005 (has links) (PDF)
Thesis (M.S.)--Washington State University. / Includes bibliographical references.
128

The dogma of the 30 meter riparian buffer : the case of the boreal toad (Bufo boreas boreas) /

Goates, Michael Calvin, January 2006 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Integrative Biology, 2006. / Includes bibliographical references (p. 27-34).
129

The response of stream ecosystems to riparian buffer width and vegetative composition in exotic plantation forests : a thesis submitted in partial fulfillment for the degree of Master of Science in Environmental Science at the University of Canterbury /

Eivers, Rebecca S. January 1900 (has links)
Thesis (M. Sc.)--University of Canterbury, 2006. / Typescript (photocopy). "June 2006." Includes bibliographical references. Also available via the World Wide Web.
130

Design and development of an interface board between a minicomputer and a CDC printer with a memory buffer and a programmable vertical format throw

Orman, PTF January 1988 (has links)
Thesis (Masters Diploma (Technology))--Cape Technikon, Cape Town, 1988 / Brown Davis and McCorquodale is one of the major suppliers of cheques to the banking industry. To produce these cheques they use a number of different print systems, one of which comprises a minicomputer, an industry-standard tape deck and two printers: a Diablo daisywheel and a Control Data Corporation (CDC) printer that was extensively modified to cater for the requirements of the cheque printing industry. The CDC printer is used to print the code line on the cheques using magnetic ink. After each line is printed the computer sends a form feed command, which causes the printer to throw paper. This throw is controlled by a paper tape known as a Vertical Format Unit (VFU) tape. The tape has holes punched into it at specific places which determine the amount of paper throw, also known as vertical feed. The holes are sensed by brushes which are pulled up to 5 volts when they pass over a hole and touch a roller connected to the 5-volt line. This system, being of an electro-mechanical nature, is prone to faults and causes much downtime due to mechanical wear on the brushes and dirt on the roller. The brushes therefore have to be adjusted regularly, and the timing readjusted each time. The timing relationships are discussed in Section 2.B.
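The "programmable vertical format" idea can be sketched as an in-memory table of channel stop positions that replaces the punched tape; the channel layout and form length below are hypothetical:

FORM_LENGTH = 66          # printable lines per form (hypothetical)

# channel -> line positions where that channel has a "hole" (a stop position)
vfu_table = {
    1: [0],               # channel 1: top of form
    2: [0, 22, 44],       # channel 2: start of each cheque on a 3-up form (hypothetical)
}

def lines_to_throw(current_line, channel):
    """How many line feeds a 'slew to channel' command should generate."""
    stops = sorted(vfu_table[channel])
    for stop in stops:
        if stop > current_line:
            return stop - current_line
    # No stop below the current position: wrap to the first stop on the next form.
    return (FORM_LENGTH - current_line) + stops[0]

print(lines_to_throw(5, 2))   # 17 line feeds to reach the next cheque
print(lines_to_throw(50, 2))  # 16 line feeds: wraps to the top of the next form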
