
Design and implementation of the Chunks feature

Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen

Recovery-driven file system design has been one of the most challenging areas among the major trends in operating systems. It has assumed considerable importance over the past decades as disk capacities have grown without a comparable improvement in disk I/O bandwidth and seek time. This rapid growth in storage size is expected to continue, driven by market demand and the ever-increasing database sizes of many companies and major businesses. For the same reason, the average cost of a file system check has risen without a significant improvement in disk I/O bandwidth and seek-time performance. Operating system bugs, power outages, and hardware failures that leave a file system in a crashed state were the main motivations behind novel recovery approaches such as journaling and soft updates. Although these approaches avoid a complete file system check by examining only inconsistencies in file system metadata, they must still fall back to checking the entire file system when the previously mentioned classes of failures occur. One emerging recovery-driven design that aims to minimize file system checking cost is the Chunkfs file system. Chunkfs takes an innovative look at file system design by dividing the file system layout into smaller chunks, each of which is a smaller-scale file system in its own right.
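To make the chunk idea concrete, the following is a minimal sketch of what a per-chunk header might look like: each chunk carries its own state flag and allocation bookkeeping, so it can be checked in isolation. The field names and layout are illustrative assumptions, not the actual Chunkfs on-disk format.

    /*
     * Hypothetical per-chunk superblock, loosely modeled on the Chunkfs
     * idea described above. Every chunk behaves like a small file system
     * with its own consistency state.
     */
    #include <stdint.h>

    #define CHUNK_CLEAN 0x0u
    #define CHUNK_DIRTY 0x1u

    struct chunk_super {
        uint32_t c_magic;        /* identifies a chunk "mini file system"  */
        uint32_t c_chunk_id;     /* index of this chunk within the volume  */
        uint32_t c_state;        /* CHUNK_CLEAN or CHUNK_DIRTY             */
        uint64_t c_block_start;  /* first block owned by this chunk        */
        uint64_t c_block_count;  /* number of blocks this chunk manages    */
        uint32_t c_free_blocks;  /* per-chunk allocation bookkeeping       */
        uint32_t c_free_inodes;
    };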
In our work we investigated an alternative recovery-driven design that is strongly inspired by the Chunkfs concepts and follows the same design guidelines. This recovery-driven design adds a new feature to the file system that best utilizes the existing underlying layout: it treats the block groups as individual chunks and confines files and directories that span block groups by means of special controlled continuation links. These links provide fault isolation by circumscribing the post-crash check to only those block groups that appear to be dirty, yielding a moderate reduction in file system checking cost. We also examined different metrics of metadata size and the likely cost of expanding files and directories across block groups.
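The sketch below illustrates how such a selective check could work, under stated assumptions: a dirty flag per block group, and continuation links recording where an object crosses into another group. The struct layout and the fsck_group() helper are hypothetical placeholders for this thesis's actual implementation; the point is that clean groups are skipped, and links out of a dirty group pull the target groups onto the work list.

    /*
     * Sketch of crash recovery that checks only dirty block groups,
     * in the spirit of the design above. Types and helpers are
     * illustrative, not the thesis's real code.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* A controlled continuation link: marks where a file or directory
     * crosses from one block group into another. */
    struct cont_link {
        uint32_t src_group;   /* group holding the continued object    */
        uint32_t dst_group;   /* group the object continues into       */
        uint64_t dst_inode;   /* continuation inode in the dest group  */
    };

    struct block_group {
        uint32_t id;
        bool dirty;                /* set on mount, cleared on clean unmount */
        struct cont_link *links;   /* outgoing continuation links            */
        size_t nlinks;
    };

    /* Check one group's metadata; placeholder for the real checks. */
    static bool fsck_group(struct block_group *g)
    {
        g->dirty = false;
        return true;
    }

    /* After a crash, repair only the dirty groups. A continuation link
     * leaving a dirty group marks its target group for checking too, so
     * objects spanning groups are verified on both ends without a scan
     * of the whole file system. */
    void fsck_dirty_groups(struct block_group *groups, size_t ngroups)
    {
        bool *checked = calloc(ngroups, sizeof(bool));
        bool progress = true;

        while (progress) {
            progress = false;
            for (size_t i = 0; i < ngroups; i++) {
                if (!groups[i].dirty || checked[i])
                    continue;              /* clean groups are skipped */
                fsck_group(&groups[i]);
                checked[i] = true;
                progress = true;
                for (size_t j = 0; j < groups[i].nlinks; j++) {
                    uint32_t dst = groups[i].links[j].dst_group;
                    if (dst < ngroups && !checked[dst])
                        groups[dst].dirty = true;
                }
            }
        }
        free(checked);
    }

Each group is checked at most once, so the loop terminates even when two groups hold continuation links into each other; in the common case of a localized crash, only a handful of groups are ever touched.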

Identifier oai:union.ndltd.org:KSU/oai:krex.k-state.edu:2097/911
Date January 1900
Creators Nory, Nawar A.
Publisher Kansas State University
Source Sets K-State Research Exchange
Language en_US
Detected Language English
Type Thesis
