11

PERFORMANCE AND ENDURANCE CONTROL IN EMERGING STORAGE TECHNOLOGIES

Roy, Tanaya, 0000-0003-4545-9299 January 2021 (has links)
The current diverse and wide range of computing is moving toward the cloud and demands high performance in the form of low latency and high throughput. Facebook reported that 3.3 billion people monthly and 2.6 billion people daily use its data centers over the network. Many emerging user-facing applications require strict control over the tail of storage latency to provide a quality user experience. The low-latency requirement is driving the ongoing replacement of hard disk drives (HDDs) by solid-state drives (SSDs) in the enterprise, enabling much higher performance and lower end-to-end storage latencies. It becomes more challenging to ensure low latency while maintaining the device's endurance ratings. We address this challenge in the following ways: 1. Enhance the overall storage system's performance and maintain SSD endurance using emerging non-volatile memory (ENVM) technology. 2. Implement deterministic latency in the storage path for latency-sensitive applications. 3. Provide low-latency and differentiated services when write-intensive workloads are present in a shared environment. We have proposed performance- and endurance-centric mechanisms to evaluate the tradeoffs between performance and endurance. In the first approach, our goal is to achieve low storage latency and a long SSD lifetime simultaneously, even for a write-heavy workload. Incorporating a significantly smaller amount of ENVM with the SSD as a cache helps to achieve this goal. SSDs using the NVMe (Non-Volatile Memory Express) interface can achieve low latency because the interface provides several advanced features. The second approach explores such features to control storage tail latency in a distributed environment. The "predictable latency mode (PLM)" feature helps to achieve deterministic storage latency. SSDs need to perform many background management operations to deal with the underlying flash technology traits, the most time-consuming being garbage collection and wear leveling. The latency requirement of latency-sensitive applications is violated when I/O requests fall behind such management activities. PLM lets SSD controllers perform the background operations during a window called the "non-deterministic window (NDWin)", whereas during the "deterministic window (DTWin)" applications experience no such operations. We have extended this feature to the distributed environment and shown how it helps achieve low storage latency when the proposed "PLM coordinator (PLMC)" is incorporated. In a shared environment with write-intensive workloads present, read I/O suffers latency peaks. Moreover, differentiated services are required when multiple QoS classes are present in the workload mixture. We have extended the PLM concept to hybrid storage to realize deterministic latency for tightly tail-controlled applications and to assure differentiated services among multiple QoS applications. Since nearly all storage access in a data center is over the network, the end-to-end path consists of three components: the host, the network, and the storage. For latency-sensitive applications, the overall tail latency needs to account for all three components. In a NAS (Network Attached Storage) architecture, it is worth studying the QoS-class-aware services present at the different components to provide an overall low request-response latency. This study therefore helps future research address the gaps that have not yet been considered.
/ Computer and Information Science
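The predictable-latency-mode coordination described in the abstract above can be pictured with a small scheduling sketch. The code below is a minimal illustration under stated assumptions, not the dissertation's implementation: it assumes replicated data, an SSD per replica that alternates between a deterministic window (DTWin) and a non-deterministic window (NDWin), and a hypothetical coordinator that routes each read to a replica currently in DTWin so the request does not wait behind garbage collection or wear leveling. Window lengths and replica names are invented for the example.

```python
import random
import time
from dataclasses import dataclass, field

@dataclass
class Replica:
    """A storage replica whose SSD alternates between a deterministic window
    (DTWin) and a non-deterministic window (NDWin), as in NVMe predictable
    latency mode. Window lengths here are assumptions for illustration."""
    name: str
    dtwin_s: float = 0.8   # assumed length of the deterministic window (seconds)
    ndwin_s: float = 0.2   # assumed length of the non-deterministic window
    phase: float = field(default_factory=random.random)

    def in_dtwin(self, now: float) -> bool:
        # Position within the repeating DTWin/NDWin cycle.
        cycle = self.dtwin_s + self.ndwin_s
        return ((now + self.phase) % cycle) < self.dtwin_s


class PLMCoordinator:
    """Hypothetical coordinator: send each read to a replica that is currently
    in its deterministic window; fall back to any replica if none is."""

    def __init__(self, replicas):
        self.replicas = replicas

    def pick_replica(self) -> Replica:
        now = time.monotonic()
        deterministic = [r for r in self.replicas if r.in_dtwin(now)]
        return random.choice(deterministic or self.replicas)


if __name__ == "__main__":
    coord = PLMCoordinator([Replica("ssd-a"), Replica("ssd-b"), Replica("ssd-c")])
    for _ in range(5):
        print("read routed to", coord.pick_replica().name)
        time.sleep(0.1)
```

The point of the sketch is only the routing decision: as long as the windows of the replicas are staggered, a read can almost always find a replica whose SSD is not performing background management.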
12

LRU-SAI: the use of LRU algorithm with separation of active and inactive pages to improve solid state storage device performance

Yu, Jingyi 06 December 2010 (has links)
No description available.
13

Data Protection and Data Elimination

Budd, Chris 10 1900 (has links)
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / Data security is becoming increasingly important in all areas of storage. The news services frequently have stories about lost or stolen storage devices and the panic they cause. Data security in an SSD usually involves two components: data protection and data elimination. Data protection includes passwords to protect against unauthorized access and encryption to protect against recovering data from the flash chips. Data elimination includes erasing the encryption key and erasing the flash. Telemetry applications frequently add requirements such as write protection, external erase triggers, and overwriting the flash after the erase. This presentation will review these data security features.
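The key-erase step of data elimination mentioned above can be illustrated with a toy example. The sketch below is an assumption-laden illustration, not the product's firmware: it uses the Python `cryptography` package to encrypt a record with a randomly generated key and shows that discarding the key alone already makes the stored ciphertext unrecoverable, before the flash is ever erased or overwritten.

```python
# Toy illustration of crypto-erase: once the data-encryption key is destroyed,
# the ciphertext left on the flash is unreadable.
# Requires: pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                       # data-encryption key held by the drive
stored_ciphertext = Fernet(key).encrypt(b"telemetry frame 0001")

# Normal read path: key present, data recoverable.
assert Fernet(key).decrypt(stored_ciphertext) == b"telemetry frame 0001"

# "Erase the encryption key": forget/overwrite the key material.
key = None

# A later attempt with any other key fails -- the ciphertext is now junk,
# even before the flash itself is erased and overwritten.
try:
    Fernet(Fernet.generate_key()).decrypt(stored_ciphertext)
except InvalidToken:
    print("data unrecoverable after key erase")
```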
14

Matching of Dental X-rays for Human Forensic Identification

Omanovic, Maja January 2006 (has links)
Dental records have been widely used as tools in forensic identification. With the vast volume of cases that need to be investigated by forensic odontologists, a move towards a computer-aided dental identification system is necessary. We propose a computer-aided framework for efficient matching of dental x-rays for human identification purposes. Given a dental x-ray with a marked region of interest (ROI), we search the database of x-rays (presumed to be taken from known individuals) to retrieve a closest match. In this work we use a slightly extended Weighted Sum of Squared Differences (SSD) cost function to express the degree of similarity/overlap between two dental radiographs. Unlike other iterative Least Squares methods that use local information for gradient-based optimization, our method finds the globally optimal translation. In 90% of the identification trials, our method ranked the correct match in the top 10% using a database of 571 images. Experiments indicate that matching dental records using the extended SSD cost function is a viable method for human dental identification.
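A minimal sketch of the kind of matching the abstract describes: an ROI from a query radiograph is slid over a database image, each translation is scored with a sum-of-squared-differences cost, and the globally best offset is the one with the lowest cost. This NumPy illustration omits the weighting scheme and any search acceleration used in the thesis; it only shows the exhaustive, globally optimal translation search.

```python
import numpy as np

def ssd_match(query_roi: np.ndarray, db_image: np.ndarray):
    """Score every translation of the ROI over the database image with a
    sum-of-squared-differences cost; return the best (row, col) offset and cost."""
    rh, rw = query_roi.shape
    ih, iw = db_image.shape
    best_offset, best_cost = None, np.inf
    for r in range(ih - rh + 1):
        for c in range(iw - rw + 1):
            patch = db_image[r:r + rh, c:c + rw]
            cost = float(np.sum((patch - query_roi) ** 2))
            if cost < best_cost:
                best_offset, best_cost = (r, c), cost
    return best_offset, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    roi = image[20:36, 30:46].copy()    # ROI cut from a known location
    print(ssd_match(roi, image))        # expect offset (20, 30) with cost 0.0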
16

Visual Tracking for a Moving Object Using Optical Flow Technique

Ching, Ya-Hsin 25 June 2003 (has links)
When an object moves continuously, its projection onto the image plane produces a succession of images, and the relative motion between the video camera and the object displaces image pixels. This apparent motion of pixel displacements is called optical flow. The advantage of the optical flow approach is that it does not require prior knowledge of the object or the environment, so the method is suitable for tracking problems in unknown environments. However, optical flow computed over the whole image is not always accurate enough for control purposes in the regions where motion or features occur. This thesis therefore first uses digital image processing to subtract two consecutive images and extract the region where motion actually occurs; optical flow is then calculated from the image information in this region. In this way, the approach not only raises tracking speed but also reduces the effect of incorrect optical flow values. As a result, both tracking accuracy and speed are greatly improved.
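A rough sketch of the two-stage idea in the abstract, written with OpenCV: consecutive grayscale frames are differenced to isolate the region where motion occurs, and dense optical flow is then computed only inside that region's bounding box. The difference threshold and the Farnebäck flow method are assumptions for illustration, not the thesis's exact algorithm.

```python
import cv2
import numpy as np

def flow_in_motion_region(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Subtract two consecutive 8-bit grayscale frames, keep the region where
    motion occurs, and compute dense optical flow only in that region."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # assumed threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                                            # no motion detected
    x0, x1, y0, y1 = xs.min(), xs.max() + 1, ys.min(), ys.max() + 1
    # Farnebäck dense flow on the motion region only (positional args:
    # prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray[y0:y1, x0:x1], curr_gray[y0:y1, x0:x1], None,
        0.5, 3, 15, 3, 5, 1.2, 0)
    return (x0, y0), flow                                      # region origin + flow field
```

Restricting the flow computation to the extracted region is what gives both the speed gain and the rejection of spurious flow vectors elsewhere in the frame.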
17

Optimizing Hierarchical Storage Management For Database System

Liu, Xin 22 May 2014 (has links)
Caching is a classical but effective way to improve system performance. Servers, such as database servers and storage servers, therefore contain significant amounts of memory that acts as a fast cache. Meanwhile, as new storage devices such as flash-based solid state drives (SSDs) are added to storage systems over time, using the memory cache is not the only way to improve system performance. In this thesis, we address the problems of how to manage the cache of a storage server and how to utilize the SSD in a hybrid storage system. Traditional caching policies are known to perform poorly for storage server caches. One promising approach to solving this problem is to use hints from the storage clients to manage the storage server cache. Previous hinting approaches are ad hoc, in that a predefined reaction to specific types of hints is hard-coded into the caching policy. With ad hoc approaches, it is difficult to ensure that the best hints are being used, and it is difficult to accommodate multiple types of hints and multiple client applications. In this thesis, we propose CLient-Informed Caching (CLIC), a generic hint-based technique for managing storage server caches. CLIC automatically interprets hints generated by storage clients and translates them into a server caching policy. It does this without explicit knowledge of the application-specific hint semantics. We demonstrate using trace-based simulation of database workloads that CLIC outperforms hint-oblivious and state-of-the-art hint-aware caching policies. We also demonstrate that the space required to track and interpret hints is small. SSDs are becoming part of the storage system. Adding an SSD to a storage system not only raises the question of how to manage the SSD, but also the question of whether current buffer pool algorithms will still work effectively. We are interested in the use of hybrid storage systems, consisting of SSDs and hard disk drives (HDDs), for database management. We present cost-aware replacement algorithms for both the DBMS buffer pool and the SSD. These algorithms are aware of the different I/O performance of HDDs and SSDs. In such a hybrid storage system, the physical access pattern to the SSD depends on the management of the DBMS buffer pool. We studied the impact of buffer pool caching policies on the access patterns of the SSD and, based on these studies, designed a caching policy to effectively manage the SSD. We implemented these algorithms in MySQL's InnoDB storage engine and used the TPC-C workload to demonstrate that these cost-aware algorithms outperform previous algorithms.
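The cost-aware replacement idea can be sketched briefly: when the buffer pool must evict a page, the victim is chosen not only by recency but also by how expensive a re-read would be, so pages whose backing copy lives only on the slow HDD survive longer than pages that can be refetched cheaply from the SSD. The following is a simplified illustration with assumed cost figures, not the InnoDB policy described in the thesis.

```python
from collections import OrderedDict

# Assumed per-read costs in milliseconds; the real algorithms measure device performance.
HDD_READ_COST = 10.0
SSD_READ_COST = 0.1

class CostAwareBufferPool:
    """Toy buffer pool: evict the page with the lowest recency_rank * refetch_cost,
    so recently used pages and pages that are expensive to re-read survive longer."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()           # page_id -> True if backed by the SSD

    def access(self, page_id: int, on_ssd: bool):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # refresh recency
        else:
            if len(self.pages) >= self.capacity:
                self._evict()
            self.pages[page_id] = on_ssd

    def _evict(self):
        # Recency rank: 1 for the least recently used page, larger for more recent.
        scored = [
            (rank * (SSD_READ_COST if on_ssd else HDD_READ_COST), pid)
            for rank, (pid, on_ssd) in enumerate(self.pages.items(), start=1)
        ]
        _, victim = min(scored)
        del self.pages[victim]

if __name__ == "__main__":
    pool = CostAwareBufferPool(capacity=3)
    pool.access(1, on_ssd=False)   # HDD-backed, expensive to refetch
    pool.access(2, on_ssd=True)
    pool.access(3, on_ssd=True)
    pool.access(4, on_ssd=True)    # forces an eviction: cheap SSD-backed page 2 goes first
    print(list(pool.pages))        # page 1 is retained despite being least recently used
```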
18

USING MLC FLASH TO REDUCE SYSTEM COST IN INDUSTRIAL APPLICATIONS

Budd, Chris 10 1900 (has links)
Storage devices based on Multi-Level Cell (MLC) NAND flash can be found in almost all computer systems except rugged, industrial systems; even though MLC is less expensive and more dense than devices based on standard Single-Level Cell (SLC) NAND flash, MLC's lower write endurance and lower retention have led system designers to avoid using it. This avoidance is unnecessary in many applications, which will never come close to the endurance limits. Furthermore, new processes are leading to storage devices with higher write endurance. System designers should review the specific use model for their systems and can select MLC-based storage devices when warranted. The result is lower system cost without worry of data loss due to write endurance.
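A back-of-the-envelope calculation illustrates why many use models never approach MLC endurance limits. The figures below (drive capacity, P/E cycle rating, write amplification, daily write volume) are assumptions for illustration only, not data from the presentation.

```python
# Rough lifetime estimate for an MLC-based drive under an assumed workload.
capacity_gb = 256          # assumed drive capacity
pe_cycles = 3_000          # assumed MLC program/erase rating
write_amplification = 2.0  # assumed controller write amplification
daily_writes_gb = 40       # assumed host writes per day

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = total_host_writes_gb / daily_writes_gb / 365
print(f"~{total_host_writes_gb / 1024:.0f} TB of host writes, "
      f"~{lifetime_years:.0f} years at {daily_writes_gb} GB/day")
```

Under these assumed numbers the drive sustains roughly 375 TB of host writes, or on the order of 25 years at 40 GB/day, far beyond the service life of most industrial systems.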
19

HOW TO MAKE A RUGGEDIZED SSD

Budd, Chris 11 1900 (has links)
SSDs are now commonplace in all types of computing from consumer laptops to enterprise storage systems. However, most of those SSDs would not survive in environments with extreme temperatures or high shock and vibration such as found in embedded and military systems. The problems in this space are more than just mechanical; they involve all aspects of the design including electrical and even firmware. A combination of all three engineering disciplines is needed to provide a robust ruggedized SSD product.
20

Metal-oxide-based electronic devices

Jin, Jidong January 2013 (has links)
Metal oxides exhibit a wide range of chemical and electronic properties, making them an extremely interesting subject for numerous applications in modern electronics. The primary goal of this research is to develop metal-oxide-based electronic devices, including thin-film transistors (TFTs), resistance random-access memory (RRAM) and planar nano-devices. This research requires different processing techniques, novel device design concepts and optimisation of materials and devices. The first experiments were carried out to optimise the properties of zinc oxide (ZnO) semiconductors, in particular the carrier concentration, which determines the threshold voltage of the TFTs. Thermal annealing is one common method of affecting carrier concentration, and most work in the literature reports performing this process in a single-gas environment. In this work, however, annealing was carried out in a combination of air and nitrogen, and it was found that the threshold voltage could be tuned over a wide range of pre-determined values. Further experiments were undertaken to enhance the carrier mobility of ZnO TFTs, which is the most important material quality parameter. By optimising deposition conditions and incorporating a high-k gate dielectric layer, the devices showed saturation mobility values over 50 cm²/Vs at a low operating voltage of 4 V. This is, to our knowledge, one of the highest field-effect mobility values achieved in ZnO-based TFTs by room-temperature sputtering. RRAM devices, an important type of novel metal-oxide-based memory that has been studied intensively in the last few years, were also explored. New materials, such as tin oxide (SnOx), were tested, exhibiting bipolar switching operations and a relatively large resistance ratio. As a novel process variation, anodisation was performed, which yielded less impressive results than SnOx but has the potential for ultra-low-cost manufacturing. Finally, novel planar nano-devices were explored, which have much simpler structures than conventional multi-layered transistors and diodes. Three types of ZnO-based nano-devices (a side-gated transistor, a self-switching diode and a planar inverter) were fabricated using both e-beam lithography and chemical wet etching. After optimisation of the challenging wet etching procedure at the nanometre scale, ZnO nano-devices with good reproducibility and reliability have been demonstrated.
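For reference, the saturation mobility quoted above is conventionally extracted from the square-law saturation drain current of a TFT; the relation below is the standard textbook expression, with the thesis's specific channel geometry (W, L) and gate-dielectric capacitance per unit area (C_i) not given here.

```latex
% Square-law drain current in saturation and the mobility extracted from it
\begin{aligned}
I_{D,\mathrm{sat}} &= \frac{W\,C_i\,\mu_{\mathrm{sat}}}{2L}\,(V_{GS}-V_T)^2,\\[4pt]
\mu_{\mathrm{sat}} &= \frac{2L}{W\,C_i}\left(\frac{\partial \sqrt{I_{D,\mathrm{sat}}}}{\partial V_{GS}}\right)^{2}.
\end{aligned}
```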
