1.
Persistence and Node Failure Recovery in Strongly Consistent Key-Value Datastore. Ehsan ul Haque, Muhammad. January 2012.
Preserving the consistency of replicated data is a critical concern for strongly consistent distributed databases. Moreover, under a fail-recovery model each process must also manage stable storage and cope with amnesia [1]. CATS is a key/value datastore that combines DHT-like (Distributed Hash Table) scalability and self-organization with atomic consistency of the replicated items. However, being an in-memory datastore that favours consistency and partition tolerance (CP), it becomes permanently unavailable in the event of a majority failure.

The goals of this thesis were twofold: (i) to implement disk-persistent storage in CATS, so that both the records and the state of each node can be persisted on disk, and (ii) to design a node failure-recovery algorithm for CATS that lets the system operate under a fail-recovery model without violating consistency.

For disk-persistent storage, two existing key/value databases are used: LevelDB [2], an implementation of log-structured merge trees [4], and BerkeleyDB [3], an implementation of log-structured B+ trees [5]. Both have been used as the underlying local storage for nodes, and the throughput and latency of the system with each is discussed. A technique that improves performance by allowing concurrent operations on the nodes is also discussed.

The node failure-recovery algorithm is designed to let nodes crash and later recover without violating consistency, and to reinstate availability once a majority of nodes have recovered. The recovery algorithm is based on persisting the state variables of the Paxos [6] acceptor and proposer, together with consistent group memberships.

For fault tolerance and recovery, a recovering process also needs to copy records from its replication group, which becomes problematic when the number of records and the volume of data are large. For this problem a technique for transferring key/value records in bulk is also described, and its effect on the latency and throughput of the system is discussed.
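The abstract does not show how the Paxos state is persisted, so the following is only a minimal sketch of the idea: writing the acceptor's state variables to LevelDB with a synchronous write before replying to a proposer, so that a crashed node can recover without amnesia. The key layout, the AcceptorState fields, and the encoding are illustrative assumptions, not CATS internals.

```cpp
// Illustrative sketch only: persisting Paxos acceptor state with LevelDB's
// synchronous writes so it survives a crash (fail-recovery model).
// The key layout and struct fields are assumptions, not CATS internals.
#include <cassert>
#include <cstdint>
#include <string>
#include <leveldb/db.h>

struct AcceptorState {          // per-instance Paxos acceptor variables
  int64_t promised_ballot;      // highest ballot promised
  int64_t accepted_ballot;      // ballot of the accepted value, if any
  std::string accepted_value;   // the accepted value itself
};

static std::string Encode(const AcceptorState& s) {
  // Simple textual encoding; a real system would use a binary format.
  return std::to_string(s.promised_ballot) + "|" +
         std::to_string(s.accepted_ballot) + "|" + s.accepted_value;
}

int main() {
  leveldb::DB* db = nullptr;
  leveldb::Options options;
  options.create_if_missing = true;
  leveldb::Status st = leveldb::DB::Open(options, "/tmp/paxos_state", &db);
  assert(st.ok());

  // Durably record the acceptor state for Paxos instance 42 *before*
  // replying to the proposer; sync=true forces the write to stable storage.
  AcceptorState state{/*promised*/7, /*accepted*/7, "put(k,v)"};
  leveldb::WriteOptions wo;
  wo.sync = true;
  st = db->Put(wo, "acceptor/42", Encode(state));
  assert(st.ok());

  // On recovery, the node reloads this state instead of starting amnesiac.
  std::string recovered;
  st = db->Get(leveldb::ReadOptions(), "acceptor/42", &recovered);
  assert(st.ok());

  delete db;
  return 0;
}
```

BerkeleyDB could be substituted behind the same pattern, since only durable put/get behaviour is relied upon here.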
2.
Extensible Networked-storage Virtualization with Metadata Management at the Block Level. Flouris, Michail D. 24 September 2009.
Increased scaling costs and the lack of desired features are driving the evolution of high-performance storage systems from centralized architectures and specialized hardware towards decentralized, commodity storage clusters. Existing systems try to address storage cost and management issues at the filesystem level. Besides dictating the use of a specific filesystem, however, this approach increases complexity and shifts load towards the file-server side, which in turn raises the cost of scaling.
In this thesis, we examine these problems at the block level. This approach has several advantages, such as transparency, cost-efficiency, better resource utilization, simplicity, and easier management.
First, we explore the mechanisms, the merits, and the overheads associated with advanced, metadata-intensive functionality at the block level by providing block-level versioning. We find that block-level versioning has low overhead and offers transparency and simplicity advantages over filesystem-based approaches.
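The abstract does not detail the versioning mechanism itself; the sketch below shows one common way block-level versioning can be realised, copy-on-write remapping of logical blocks under a per-volume version counter, purely as an illustration. The VersionedVolume type and its methods are assumptions, not Violin's interfaces.

```cpp
// Illustrative sketch: block-level versioning via copy-on-write remapping.
// Each write to a logical block under the current version is redirected to a
// fresh physical block; older versions keep their original mappings, so any
// past version can still be read (and used for roll-back).
// Types and names are assumptions for illustration, not Violin's API.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

class VersionedVolume {
 public:
  // Map (version, logical block) -> index into the physical block store.
  void Write(uint64_t lblock, const std::string& data) {
    store_.push_back(data);                       // allocate a new physical block
    map_[{current_version_, lblock}] = store_.size() - 1;
  }

  // Read resolves a logical block against the newest mapping not newer than `version`.
  bool Read(uint64_t version, uint64_t lblock, std::string* out) const {
    for (uint64_t v = version + 1; v-- > 0;) {    // walk versions downwards
      auto it = map_.find({v, lblock});
      if (it != map_.end()) { *out = store_[it->second]; return true; }
    }
    return false;                                 // block never written
  }

  uint64_t CaptureVersion() { return current_version_++; }  // freeze a snapshot

 private:
  uint64_t current_version_ = 0;
  std::map<std::pair<uint64_t, uint64_t>, size_t> map_;
  std::vector<std::string> store_;
};

int main() {
  VersionedVolume vol;
  vol.Write(0, "v0 contents of block 0");
  uint64_t snap = vol.CaptureVersion();           // version 0 is now immutable
  vol.Write(0, "v1 contents of block 0");

  std::string old_data, new_data;
  vol.Read(snap, 0, &old_data);                   // reads the pre-snapshot data
  vol.Read(snap + 1, 0, &new_data);               // reads the latest data
  std::cout << old_data << "\n" << new_data << "\n";
  return 0;
}
```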
Secondly, we study the problem of providing the extensibility required by diverse and changing application needs that may share a single storage system. We provide support for (i) adding desired functions as block-level extensions, and (ii) flexibly combining them to create modular I/O hierarchies. In this direction, we design, implement, and evaluate an extensible block-level storage virtualization framework, Violin, with support for metadata-intensive functions. Extending Violin, we build Orchestra, an extensible framework for cluster storage virtualization and scalable storage sharing at the block level. We show that Orchestra's enhanced block interface can substantially simplify the design of higher-level storage services, such as cluster filesystems, while remaining scalable.
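As an illustration of what combining block-level extensions into a modular I/O hierarchy can look like, the sketch below stacks two layers behind one narrow block interface. The BlockDevice interface and the layer names are assumptions made for illustration, not Violin's or Orchestra's actual API.

```cpp
// Illustrative sketch: composing block-level extensions into a modular I/O
// hierarchy. Each extension implements the same narrow block interface and
// wraps a lower layer, so functions can be stacked in any order.
// Interface and layer names are assumptions, not Violin's or Orchestra's API.
#include <cstdint>
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct BlockDevice {                          // the common block interface
  virtual ~BlockDevice() = default;
  virtual void Write(uint64_t block, const std::string& data) = 0;
  virtual std::string Read(uint64_t block) = 0;
};

class RamDisk : public BlockDevice {          // bottom layer: in-memory "disk"
 public:
  void Write(uint64_t block, const std::string& data) override { blocks_[block] = data; }
  std::string Read(uint64_t block) override { return blocks_[block]; }
 private:
  std::map<uint64_t, std::string> blocks_;
};

class LoggingLayer : public BlockDevice {     // example extension: trace all I/O
 public:
  explicit LoggingLayer(std::unique_ptr<BlockDevice> lower) : lower_(std::move(lower)) {}
  void Write(uint64_t block, const std::string& data) override {
    std::cout << "write block " << block << "\n";
    lower_->Write(block, data);
  }
  std::string Read(uint64_t block) override {
    std::cout << "read block " << block << "\n";
    return lower_->Read(block);
  }
 private:
  std::unique_ptr<BlockDevice> lower_;
};

int main() {
  // Stack the layers: logging on top of the RAM disk. Further extensions
  // (versioning, encryption, RAID) would be inserted in exactly the same way.
  std::unique_ptr<BlockDevice> dev =
      std::make_unique<LoggingLayer>(std::make_unique<RamDisk>());
  dev->Write(7, "hello");
  std::cout << dev->Read(7) << "\n";
  return 0;
}
```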
Finally, we consider the problem of consistency and availability in decentralized commodity clusters. We propose RIBD, a novel storage system that supports handling both data and metadata consistency issues at the block layer. RIBD uses the notion of consistency intervals (CIs) to provide fine-grained consistency semantics over sequences of block-level operations by means of a lightweight transactional mechanism. RIBD relies on Orchestra's virtualization mechanisms and uses a roll-back recovery mechanism based on low-overhead block-level versioning. We evaluate RIBD on a cluster of 24 nodes and find that it performs comparably to two popular cluster filesystems, PVFS and GFS, while offering stronger consistency guarantees.
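The abstract describes consistency intervals only at a high level; the following is a minimal, self-contained sketch of the idea, a sequence of block writes that either takes effect as a unit or is rolled back from saved pre-images, standing in for the block-level-versioning-based roll-back that RIBD actually uses. All names are assumptions, not RIBD's interface.

```cpp
// Illustrative sketch: a consistency interval (CI) as a lightweight
// transaction over a sequence of block writes, with roll-back based on saving
// the pre-interval contents of every block it touches (a crude stand-in for
// low-overhead block-level versioning). Names are assumptions, not RIBD's API.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

using Volume = std::map<uint64_t, std::string>;   // logical block -> contents

class ConsistencyInterval {
 public:
  explicit ConsistencyInterval(Volume* vol) : vol_(vol) {}

  void Write(uint64_t block, const std::string& data) {
    // Save the old contents the first time a block is touched,
    // so the whole interval can be undone.
    undo_.emplace(block, (*vol_)[block]);
    (*vol_)[block] = data;
  }

  void Commit() { undo_.clear(); }        // point of no return: drop the undo log

  void Rollback() {                       // e.g. after a failure mid-interval
    for (const auto& [block, old] : undo_) (*vol_)[block] = old;
    undo_.clear();
  }

 private:
  Volume* vol_;
  std::map<uint64_t, std::string> undo_;  // first pre-image of each touched block
};

int main() {
  Volume vol;
  vol[1] = "inode table v1";

  ConsistencyInterval ci(&vol);           // begin CI: a data + metadata update
  ci.Write(1, "inode table v2");
  ci.Write(9, "new data block");
  ci.Rollback();                          // simulate a failure before commit

  std::cout << vol[1] << "\n";            // prints the original "inode table v1"
  return 0;
}
```

In a full system the undo information would itself live on stable storage (here, the earlier versioning idea), so that a node can roll back even after a crash.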