1

PERFORMANCE AND ENDURANCE CONTROL IN EMERGING STORAGE TECHNOLOGIES

Roy, Tanaya, 0000-0003-4545-9299 January 2021 (has links)
The current diverse and wide range of computing is moving toward the cloud and demands high performance in the form of low latency and high throughput. Facebook has reported that 3.3 billion people monthly, and 2.6 billion daily, use its data centers over the network. Many emerging user-facing applications require strict control over the tail of storage latency to provide a quality user experience. The low-latency requirement drives the ongoing replacement of hard drives (HDDs) by solid-state drives (SSDs) in the enterprise, enabling much higher performance and lower end-to-end storage latencies. It becomes more challenging to ensure low latency while maintaining the device's endurance ratings. We address this challenge in the following ways: 1. Enhance the overall storage system's performance and maintain SSD endurance using emerging non-volatile memory (ENVM) technology. 2. Implement deterministic latency in the storage path for latency-sensitive applications. 3. Provide low-latency and differentiated services when write-intensive workloads are present in a shared environment. We propose performance- and endurance-centric mechanisms to evaluate the tradeoffs between performance and endurance. In the first approach, our goal is to achieve low storage latency and a long SSD lifetime simultaneously, even for a write-heavy workload; incorporating a significantly smaller amount of ENVM alongside the SSD as a cache helps achieve this goal. SSDs using the NVMe (Non-Volatile Memory Express) interface can achieve low latency because the interface provides several advanced features. The second approach explores such features to control storage tail latency in a distributed environment. The "Predictable Latency Mode (PLM)" advanced feature helps achieve deterministic storage latency. SSDs need to perform many background management operations to deal with the traits of the underlying flash technology, the most time-consuming being garbage collection and wear leveling. The latency requirements of latency-sensitive applications are violated when I/O requests fall behind such management activities. PLM allows SSD controllers to confine these background operations to a "non-deterministic window (NDWin)", whereas during the "deterministic window (DTWin)" applications experience no such operations. We have extended this feature to the distributed environment and shown how it helps achieve low storage latency when the proposed "PLM coordinator (PLMC)" is incorporated. In a shared environment, the presence of write-intensive workloads results in latency peaks for read I/O. Moreover, differentiated services must be provided when multiple QoS classes are present in the workload mixture. We have extended the PLM concept to hybrid storage to realize deterministic latency for applications with tight tail-latency requirements and to assure differentiated services among multiple QoS classes. Since nearly all storage access in a data center happens over the network, an end-to-end path consists of three components: the host, the network, and the storage. For latency-sensitive applications, the overall tail latency needs to account for all of these components. In a NAS (Network Attached Storage) architecture, it is worth studying QoS-class-aware services at the different components to provide an overall low request-response latency; this helps future research address the gaps that have not yet been considered.
/ Computer and Information Science
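
The "PLM coordinator (PLMC)" described in the abstract above schedules I/O around each SSD's deterministic (DTWin) and non-deterministic (NDWin) windows. The following is a minimal, purely illustrative Python sketch of that idea, not the thesis's actual implementation; the class names, window lengths, and replica-selection policy are all assumptions made for the example:

```python
import time
from dataclasses import dataclass

@dataclass
class Ssd:
    """Hypothetical model of one NVMe SSD that alternates PLM windows."""
    name: str
    dtwin_s: float = 2.0    # length of the deterministic window (DTWin)
    ndwin_s: float = 0.5    # length of the non-deterministic window (NDWin)
    epoch: float = 0.0      # when this SSD's current cycle started

    def in_dtwin(self, now: float) -> bool:
        # Position inside the repeating DTWin + NDWin cycle.
        phase = (now - self.epoch) % (self.dtwin_s + self.ndwin_s)
        return phase < self.dtwin_s

class PlmCoordinator:
    """Toy coordinator: send reads to a replica in DTWin whenever possible."""
    def __init__(self, replicas):
        self.replicas = replicas

    def pick_replica(self) -> Ssd:
        now = time.monotonic()
        for ssd in self.replicas:
            if ssd.in_dtwin(now):
                return ssd          # predictable latency expected here
        # All replicas are doing background work (GC/wear leveling);
        # fall back to the first one and accept a possible latency spike.
        return self.replicas[0]

# Usage: stagger the replicas' windows so at least one is always deterministic.
replicas = [Ssd("ssd-a", epoch=0.0), Ssd("ssd-b", epoch=1.25)]
coord = PlmCoordinator(replicas)
print("read goes to:", coord.pick_replica().name)
```

One design point worth noting: staggering the replicas' window phases, as the usage lines do, is what lets a coordinator keep at least one replica in its deterministic window at any time.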
2

Enhancing storage performance in virtualized environments: a pro-active approach

Sivathanu, Sankaran 17 May 2011 (has links)
Efficient storage and retrieval of data is critical in today's computing environments, and storage systems need to keep pace with the evolution of other system components such as the CPU and memory to build an overall efficient system. With virtualization becoming pervasive in enterprise and cloud-based infrastructures, it becomes vital to build I/O systems that better account for the changed conditions of virtualized systems. However, the evolution of storage systems has been limited significantly by adherence to legacy interface standards between the operating system and the storage subsystem. Even though storage systems have become more powerful in recent times, hosting large processors and memory, the thin interface to the file system prevents vital information contained in the storage system from being used by higher layers. Virtualization compounds this problem by adding new indirection layers that make the underlying storage system even more opaque to the operating system. This dissertation addresses the problem of inefficient use of disk information by identifying storage-level opportunities and developing pro-active techniques for storage management. We present a new class of storage systems called pro-active storage systems (PaSS), which, in addition to being compatible with the existing I/O interface, exert a limited degree of control over file system policies by leveraging their internal information. In this dissertation, we present our PaSS framework, which includes two new I/O interfaces called push and pull, in the context of both traditional and virtualized systems. We demonstrate the usefulness of our PaSS framework through a series of case studies that exploit information available in the underlying storage system layer for overall improvement in I/O performance. We also built a framework to evaluate the performance and energy of modern storage systems by implementing a novel I/O trace replay tool and an analytical model for measuring the performance and energy of complex storage systems. We believe that our PaSS framework and the suite of evaluation tools help in better understanding modern storage system behavior and thereby in implementing efficient policies in the higher layers for better performance, data reliability, and energy efficiency by making use of the new interfaces in our framework.
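
The push and pull interfaces are the core of the PaSS idea: the storage system can volunteer ("push") internal information up to the file system and answer explicit ("pull") queries, while remaining compatible with the ordinary block interface. Below is a minimal, hypothetical Python sketch of such interfaces; the class, method, and field names are illustrative assumptions, not the dissertation's actual API:

```python
from typing import Any, Callable, Dict, List

class ProActiveStorage:
    """Toy pro-active storage system (PaSS): besides the usual read/write
    path, it can push hints up to the file system and answer pull queries
    about its internal state."""

    def __init__(self):
        self._subscribers: List[Callable[[str, Dict[str, Any]], None]] = []
        # Internal information normally hidden behind the block interface.
        self._internal_state = {"hot_blocks": {1024, 1025}, "cache_free_pct": 37}

    def subscribe(self, callback: Callable[[str, Dict[str, Any]], None]) -> None:
        """File system registers interest in pushed hints."""
        self._subscribers.append(callback)

    def push(self, event: str, info: Dict[str, Any]) -> None:
        """Storage-initiated hint, e.g. 'cache under pressure' or 'block remapped'."""
        for cb in self._subscribers:
            cb(event, info)

    def pull(self, query: str) -> Any:
        """File-system-initiated query into storage-internal information."""
        return self._internal_state.get(query)

# Usage: the file system adapts its policy based on pushed/pulled information.
def fs_hint_handler(event: str, info: Dict[str, Any]) -> None:
    print(f"FS received push: {event} -> {info}")

storage = ProActiveStorage()
storage.subscribe(fs_hint_handler)
storage.push("cache_pressure", {"cache_free_pct": 5})
print("hot blocks (pull):", storage.pull("hot_blocks"))
```

A file system built on such hints could, for example, steer allocation away from blocks the storage layer reports as hot, without any change to the legacy read/write path.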
3

Cloud-native storage solutions for Kubernetes : A performance comparison

Andersson, Filip January 2023 (has links)
Kubernetes is a container orchestration system that has been rising in popularity in recent years. The modular nature of Kubernetes allows the use of different storage solutions, and for cloud environments, cloud-native distributed storage solutions may be attractive due to their redundant nature. There are many cloud-native distributed storage tools available on the market today, with differing features and performance, and choosing the right one for an organisation can be difficult. Organisations utilising Kubernetes in cloud environments want to be as performance-efficient as possible to save on costs and resources. This study offers a benchmark and analysis of some of the most popular tools, to help organisations choose the 'best' solution for their operational needs from a performance perspective. The benchmarks compare three cloud-native distributed storage solutions, OpenEBS, Portworx, and Rook-Ceph, on both Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). For a baseline comparison, the study also benchmarks the cloud providers' own solutions: Azure Disk Storage and Amazon Elastic Block Store. The study compares these solutions on three key metrics, bandwidth, latency, and IOPS, in both read and write performance. / There is other digital material (e.g. film, image, or audio files) or models/artifacts belonging to the thesis that need to be archived.
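
For readers who want to reproduce this kind of comparison, read/write bandwidth, latency, and IOPS are commonly collected with fio run inside a pod whose volume is provisioned by the storage class under test. The sketch below is one way to drive fio from Python; the job parameters and JSON field names are assumptions that may need adjusting for a particular fio version, and the study's own benchmark setup may differ:

```python
import json
import subprocess

def run_fio(test_file: str, mode: str = "randread", runtime_s: int = 30) -> dict:
    """Run a single fio job and return bandwidth (KiB/s), IOPS, and mean
    completion latency (ms). Assumes fio is installed in the pod/VM."""
    cmd = [
        "fio",
        "--name=k8s-storage-bench",
        f"--filename={test_file}",
        f"--rw={mode}",            # randread / randwrite / read / write
        "--bs=4k",
        "--iodepth=32",
        "--ioengine=libaio",
        "--direct=1",
        "--size=1G",
        f"--runtime={runtime_s}",
        "--time_based",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    side = "read" if "read" in mode else "write"
    stats = job[side]
    return {
        "bw_kib_s": stats["bw"],
        "iops": stats["iops"],
        "mean_lat_ms": stats["clat_ns"]["mean"] / 1e6,
    }

if __name__ == "__main__":
    # e.g. /data is a volume provisioned by the storage class under test
    print(run_fio("/data/fio.test", mode="randwrite"))
```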
4

A two-dimensional hybrid with molybdenum disulfide nanocrystals strongly coupled on nitrogen-enriched graphene via mild temperature pyrolysis for high performance lithium storage

Tang, Yanping, Wu, Dongqing, Mai, Yiyong, Pan, Hao, Cao, Jing, Yang, Chongqing, Zhang, Fan, Feng, Xinliang 16 December 2019 (has links)
A novel 2D hybrid with MoS₂ nanocrystals strongly coupled on nitrogen-enriched graphene (MoS₂/NGg-C₃N₄) is realized by mild-temperature pyrolysis (550 °C) of a self-assembled precursor (MoS₃/g-C₃N₄–H⁺/GO). With rich active sites, boosted electronic conductivity, and the coupled structure, MoS₂/NGg-C₃N₄ achieves superior lithium storage performance.
