1 |
Storage Virtualization: A Case Study on Linux. Lin, Luen-Yung. 28 June 2007.
In the era of explosive information growth, the storage subsystem is becoming ever more important in daily life and in commercial markets. Because more and more data are recorded in digital form and stored on storage devices, an intelligent mechanism is required to manage digital data and storage devices more efficiently, rather than simply adding more storage equipment to a system. The concept of storage virtualization was introduced to solve this problem by aggregating all the physical devices into a single virtual storage device and hiding the complexity of the underlying block devices. Through this virtual layer, users can dynamically allocate and resize their virtual storage devices to satisfy their needs, and they can also use the methods provided by the virtual layer to organize data more efficiently.
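To make the aggregation idea concrete, the following is a minimal C sketch, not drawn from the thesis, of a virtual volume that concatenates several physical devices into one linear sector space; the names (phys_dev, virt_map) are illustrative assumptions.

#include <stdint.h>
#include <stddef.h>

/* One physical device in the aggregate (illustrative). */
struct phys_dev {
    const char *name;     /* e.g. "/dev/sdb" */
    uint64_t    sectors;  /* capacity in 512-byte sectors */
};

/* A virtual volume concatenating several physical devices
 * into a single linear logical address space. */
struct virt_vol {
    struct phys_dev *devs;
    size_t           ndevs;
};

/* Translate a logical sector on the virtual volume into a
 * (device, physical sector) pair by walking the concatenation. */
int virt_map(const struct virt_vol *v, uint64_t lsec,
             const struct phys_dev **dev, uint64_t *psec)
{
    for (size_t i = 0; i < v->ndevs; i++) {
        if (lsec < v->devs[i].sectors) {
            *dev  = &v->devs[i];
            *psec = lsec;
            return 0;
        }
        lsec -= v->devs[i].sectors;  /* skip past this device */
    }
    return -1;  /* logical sector beyond the end of the volume */
}

Under this scheme, growing the volume amounts to appending another physical device to the array, which suggests how dynamic resizing can be exposed without disturbing data already stored.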
Linux Logical Volume Manager 2 (LVM2) is an implementation of storage virtualization on the Linux operating system. It consists of three components: the kernel-space device-mapper, the user-space device-mapper support library (libdevmapper), and the user-space LVM2 toolset. This thesis focuses on the kernel-space device-mapper, which provides the virtualization mechanism for the user-space logical volume manager. The thesis is organized as follows: (1) introduce novel technologies of recent years, (2) provide an advanced document on the internals of the device-mapper, (3) attempt to optimize the mapping-table algorithm, and (4) evaluate the performance of the device-mapper.
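As a rough illustration of the mapping-table lookup mentioned in (3), the sketch below keeps the table entries sorted by starting sector and resolves a logical sector with a binary search in O(log n) time rather than a linear scan; the structures and names are assumptions for exposition, not the device-mapper's actual kernel code.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical mapping-table entry, loosely modeled on a
 * device-mapper target line: <start> <length> <target args>. */
struct dm_entry {
    uint64_t start;   /* first logical sector covered          */
    uint64_t len;     /* number of sectors covered             */
    int      dev;     /* index of the backing device           */
    uint64_t offset;  /* physical offset on the backing device */
};

/* Resolve a logical sector by binary search over entries kept
 * sorted by start sector; non-overlapping segments assumed. */
const struct dm_entry *dm_lookup(const struct dm_entry *tbl,
                                 size_t n, uint64_t lsec)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (lsec < tbl[mid].start)
            hi = mid;                      /* search left half  */
        else if (lsec >= tbl[mid].start + tbl[mid].len)
            lo = mid + 1;                  /* search right half */
        else
            return &tbl[mid];  /* lsec falls inside this segment */
    }
    return NULL;  /* no segment covers this sector */
}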
|
2 |
Extensible Networked-storage Virtualization with Metadata Management at the Block Level. Flouris, Michail D. 24 September 2009.
Increased scaling costs and a lack of desired features are driving the evolution of high-performance storage systems from centralized architectures and specialized hardware toward decentralized, commodity storage clusters. Existing systems try to address storage cost and management issues at the filesystem level. Besides dictating the use of a specific filesystem, however, this approach leads to increased complexity and load imbalance on the file-server side, which in turn increases the cost to scale.
In this thesis, we examine these problems at the block level. This approach has several advantages, such as transparency, cost-efficiency, better resource utilization, simplicity, and easier management.
First, we explore the mechanisms, merits, and overheads associated with advanced metadata-intensive functionality at the block level by providing block-level versioning. We find that block-level versioning has low overhead and offers transparency and simplicity advantages over filesystem-based approaches.
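As a hedged illustration of one way block-level versioning can work, the in-memory C sketch below preserves a block's old contents, tagged with the epoch in which they were written, before each cross-epoch overwrite, so any earlier volume state stays readable; the layout and names are illustrative, not the thesis's implementation.

#include <stdint.h>
#include <string.h>

#define BLK_SIZE     4096
#define MAX_VERSIONS 64

/* A versioned block: the live copy plus saved copies tagged with
 * the epoch in which their data was written (oldest to newest). */
struct vblock {
    uint8_t  live[BLK_SIZE];
    uint32_t live_epoch;     /* epoch of the live data */
    struct { uint32_t epoch; uint8_t data[BLK_SIZE]; }
             old[MAX_VERSIONS];
    uint32_t nold;
};

/* Copy-on-write: preserve the live copy only when it belongs to
 * an earlier epoch; overwrites within one epoch need no copy. */
int vblock_write(struct vblock *b, uint32_t cur_epoch,
                 const uint8_t *data)
{
    if (b->live_epoch < cur_epoch) {
        if (b->nold == MAX_VERSIONS)
            return -1;                    /* version store full */
        b->old[b->nold].epoch = b->live_epoch;
        memcpy(b->old[b->nold].data, b->live, BLK_SIZE);
        b->nold++;
    }
    memcpy(b->live, data, BLK_SIZE);
    b->live_epoch = cur_epoch;
    return 0;
}

/* Read the block as of a given epoch: the newest copy written
 * at or before that epoch. */
const uint8_t *vblock_read(const struct vblock *b, uint32_t epoch)
{
    if (b->live_epoch <= epoch)
        return b->live;
    for (uint32_t i = b->nold; i-- > 0; )    /* newest saved first */
        if (b->old[i].epoch <= epoch)
            return b->old[i].data;
    return NULL;  /* block did not exist at that epoch */
}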
Second, we study the problem of providing the extensibility required by the diverse and changing needs of applications that may share a single storage system. We provide support for (i) adding desired functions as block-level extensions and (ii) flexibly combining them to create modular I/O hierarchies. In this direction, we design, implement, and evaluate an extensible block-level storage virtualization framework, Violin, with support for metadata-intensive functions. Extending Violin, we build Orchestra, an extensible framework for cluster storage virtualization and scalable storage sharing at the block level. We show that Orchestra's enhanced block interface can substantially simplify the design of higher-level storage services, such as cluster filesystems, while remaining scalable.
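The modular-hierarchy idea can be sketched in C with a uniform layer interface: every layer implements the same read/write hooks and forwards requests to the layer beneath it, so extensions such as versioning, encryption, or RAID compose by stacking. This is an assumption about the general shape of such a framework, not Violin's actual API.

#include <stdint.h>
#include <stddef.h>

/* A stackable block layer: identical interface at every level. */
struct blk_layer {
    int (*read)(struct blk_layer *self, uint64_t sector,
                void *buf, size_t len);
    int (*write)(struct blk_layer *self, uint64_t sector,
                 const void *buf, size_t len);
    struct blk_layer *below;   /* next layer toward the disk      */
    void             *priv;    /* per-layer state (maps, keys, etc.) */
};

/* Example extension: a pass-through layer; a real extension would
 * remap sectors or transform data here before forwarding. */
static int pass_read(struct blk_layer *self, uint64_t sector,
                     void *buf, size_t len)
{
    return self->below->read(self->below, sector, buf, len);
}

static int pass_write(struct blk_layer *self, uint64_t sector,
                      const void *buf, size_t len)
{
    return self->below->write(self->below, sector, buf, len);
}

Because each layer sees only the interface of the one beneath it, an I/O hierarchy is assembled simply by linking layers through the below pointers.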
Finally, we consider the problem of consistency and availability in decentralized commodity clusters. We propose RIBD, a novel storage system that supports handling both data and metadata consistency issues at the block layer. RIBD uses the notion of consistency intervals (CIs) to provide fine-grained consistency semantics on sequences of block-level operations by means of a lightweight transactional mechanism. RIBD relies on Orchestra's virtualization mechanisms and uses a rollback recovery mechanism based on low-overhead block-level versioning. We evaluate RIBD on a cluster of 24 nodes and find that it performs comparably to two popular cluster filesystems, PVFS and GFS, while offering stronger consistency guarantees.
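To illustrate the consistency-interval idea, here is a minimal in-memory C sketch in which each interval keeps an undo log of the pre-write contents of every block it touches; closing the interval discards the log, while aborting (or recovering an unclosed interval after a crash) rolls the blocks back, loosely mirroring the versioning-based rollback described above. All names and the in-memory layout are assumptions.

#include <stdint.h>
#include <string.h>

#define BLK     4096
#define NBLOCKS 1024
#define CI_MAX  128

static uint8_t volume[NBLOCKS][BLK];   /* the "committed" state */

/* A consistency interval: an undo log of pre-write block images. */
struct ci {
    struct { uint64_t blk; uint8_t data[BLK]; } undo[CI_MAX];
    uint32_t n;
};

void ci_begin(struct ci *c) { c->n = 0; }

/* Write in place, but first save the old version for rollback. */
int ci_write(struct ci *c, uint64_t blk, const void *data)
{
    if (c->n == CI_MAX || blk >= NBLOCKS)
        return -1;
    c->undo[c->n].blk = blk;
    memcpy(c->undo[c->n].data, volume[blk], BLK);  /* old version */
    c->n++;
    memcpy(volume[blk], data, BLK);                /* new data    */
    return 0;
}

/* A closed interval is durable: its undo log is discarded. */
void ci_end(struct ci *c) { c->n = 0; }

/* Roll back an unfinished interval, newest write first, so readers
 * never observe a half-applied sequence of block operations. */
void ci_abort(struct ci *c)
{
    while (c->n > 0) {
        c->n--;
        memcpy(volume[c->undo[c->n].blk], c->undo[c->n].data, BLK);
    }
}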
|