1. Increasing the performance of storage services for local area networks. Wilson, Timothy David, January 1992.
No description available.

2. Block-Based Distributed File Systems. McGregor, Anthony James, January 1997.
Distributed file systems have become popular because they allow information to be shared between computers in a natural way. A distributed file system often forms a central building block in a distributed system. Currently most distributed file systems are built using a communications interface that transfers messages about files between machines. This thesis proposes a different, lower-level, communications interface. This `block-based' interface exchanges information about the blocks that make up the file but not about the files themselves. No other distributed file system is built this way. By demonstrating that a distributed file system can be implemented in a block-based manner, this thesis opens the way for many advances in distributed file systems. These include a reduction of the processing required at the server, uniformity in managing file blocks, and fine-grained placement and replication of data. The simple communications model also lends itself to efficient implementation both at the server and in the communications protocols that support the interface. These advantages come at the cost of a more complex client implementation and the need for a lower-level consistency mechanism. A block-based distributed file system (BB-NFS) has been implemented. BB-NFS provides the Unix file system interface and demonstrates the feasibility and implementability of the block-based approach. Experience with the implementation led to the development of a lock cache mechanism which gives a large improvement in the performance of the prototype. Although this has not been measured directly, it is plausible that the prototype will perform better than the file-based approach. The block-based approach has much to offer future distributed file system developers. This thesis introduces the approach and its advantages, demonstrates its feasibility and shows that it can be implemented in a way that performs well.
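A minimal sketch of the contrast the abstract draws, with illustrative names (this is not BB-NFS's actual interface): the server stores and serves only numbered blocks, while all file semantics, such as names, block maps, and reassembly, live in the client.

```python
# Minimal sketch of a block-based interface (illustrative; not BB-NFS's
# actual API). The server exchanges only numbered blocks; all file
# semantics live in the client.

BLOCK_SIZE = 4096

class BlockServer:
    """Knows nothing about files, only numbered blocks."""
    def __init__(self):
        self.blocks = {}                       # block id -> bytes

    def read_block(self, block_id):
        return self.blocks[block_id]

    def write_block(self, block_id, data):
        assert len(data) <= BLOCK_SIZE
        self.blocks[block_id] = data

class BlockFSClient:
    """Tracks which blocks make up each file and reassembles them."""
    def __init__(self, server):
        self.server = server
        self.block_map = {}                    # filename -> [block ids]
        self.next_id = 0

    def write_file(self, name, data):
        ids = []
        for off in range(0, len(data), BLOCK_SIZE):
            self.server.write_block(self.next_id, data[off:off + BLOCK_SIZE])
            ids.append(self.next_id)
            self.next_id += 1
        self.block_map[name] = ids

    def read_file(self, name):
        return b"".join(self.server.read_block(i) for i in self.block_map[name])

client = BlockFSClient(BlockServer())
client.write_file("notes.txt", b"hello, block world")
assert client.read_file("notes.txt") == b"hello, block world"
```

The trade-off the abstract mentions is visible here: the server becomes trivial, but the client, not the server, must track which blocks make up each file.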

3. A Tag-Based, Logical Access-Control Framework for Personal File Sharing. Mazurek, Michelle L., 01 May 2014.
People store and share ever-increasing numbers of digital documents, photos, and other files, both on personal devices and within online services. In this environment, proper access control is critical to help users obtain the benefits of sharing varied content with different groups of people while avoiding trouble at work, embarrassment, identity theft, and other problems related to unintended disclosure. Current approaches often fail, either because they insufficiently protect data or because they confuse users about policy specification. Historically, correctly managing access control has proven difficult, time-consuming, and error-prone, even for experts; to make matters worse, access control remains a secondary task most non-experts are unwilling to spend significant time on.
To solve this problem, access control for file-sharing tools and services should provide verifiable security, make policy configuration and management simple and understandable for users, reduce the risk of user error, and minimize the required user effort. This thesis presents three user studies that provide insight into people’s access-control needs and preferences. Drawing on the results of these studies, I present Penumbra, a prototype distributed file system that combines semantic, tag-based policy specification with logic-based access control, flexibly supporting intuitive policies while providing high assurance of correctness. Penumbra is evaluated using a set of detailed, realistic case studies drawn from the presented user studies. Microbenchmarks and traces generated from the case studies show that Penumbra can enforce users’ policies with overhead of less than 5% for most system calls. Finally, I present lessons learned, which can inform the further development of usable access-control mechanisms both for sharing files and in the broader context of personal data.
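A toy sketch of combining semantic tags with logical policy, assuming an illustrative rule form (Penumbra's actual logic-based policy language is richer): files carry tags, and access is granted when some rule's tag predicate is satisfied.

```python
# Toy sketch of tag-based, logic-style access control (illustrative rule
# form; not Penumbra's actual policy language). Files carry attribute
# tags; access is granted if any rule for the principal and action has
# all its required tags present and none of its forbidden tags.

file_tags = {
    "beach.jpg":   {"photo", "vacation"},
    "review.docx": {"work", "confidential"},
}

# Each rule: (principal, action, must-have tags, must-not-have tags)
policy = [
    ("alice", "read", {"photo", "vacation"}, set()),
    ("alice", "read", {"work"}, {"confidential"}),
]

def allowed(principal, action, filename):
    tags = file_tags[filename]
    return any(p == principal and a == action
               and need <= tags and not (deny & tags)
               for p, a, need, deny in policy)

assert allowed("alice", "read", "beach.jpg")        # vacation photos shared
assert not allowed("alice", "read", "review.docx")  # confidential blocked
```

Specifying policy over tags rather than paths is what lets one rule cover varied content scattered across devices, which is the usability gain the studies motivate.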

4. Macro-modeling and energy efficiency studies of file management in embedded systems with flash memory. Goyal, Nitesh, 16 August 2006.
Technological advancements in computer hardware and software have made embedded systems highly affordable and widely used. Consumers have ever-increasing demands for powerful embedded devices such as cell phones, PDAs, and media players. Such complex and feature-rich embedded devices are strictly limited by their battery lifetime. Embedded systems are typically diskless and use flash for secondary storage to meet their needs for low power, persistent storage, and a small form factor. The energy efficiency of the processor and flash in an embedded system depends heavily on the choice of file system in use. To address this problem, it is necessary to provide system developers with energy profiles of file system activities and with energy-efficient file systems. In the first part of the thesis, a macro-model for the CRAMFS file system is established which characterizes the processor and flash energy consumption due to file system calls. This macro-model allows a system developer to estimate the energy consumed by CRAMFS without using an actual power-measurement setup. The second part of the thesis examines the effects of using non-volatile memory as a write-behind buffer to improve the energy efficiency of JFFS2. Experimental results show that a 4KB write-behind buffer reduces energy consumption by a factor of 2 to 3 for consecutive small writes. In addition, the write-behind buffer conserves flash space, since transient data may never be written to flash.
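A simplified sketch of the write-behind idea from the second part, with illustrative names (not the thesis's JFFS2 modification): small consecutive writes accumulate in a 4KB buffer and reach flash only when the buffer fills, so many small writes collapse into a few flash operations.

```python
# Simplified sketch of a write-behind buffer in front of flash
# (illustrative; not the thesis's actual JFFS2 implementation). Small
# writes accumulate in a 4KB non-volatile buffer and are written to
# flash only when the buffer fills, so transient data that is deleted
# early may never reach flash at all.

BUFFER_SIZE = 4096

class WriteBehindBuffer:
    def __init__(self):
        self.buffer = bytearray()
        self.flash_writes = 0          # count of (expensive) flash operations

    def write(self, data):
        self.buffer.extend(data)
        while len(self.buffer) >= BUFFER_SIZE:
            self._flush_to_flash(bytes(self.buffer[:BUFFER_SIZE]))
            del self.buffer[:BUFFER_SIZE]

    def _flush_to_flash(self, chunk):
        self.flash_writes += 1         # stand-in for a real flash write

buf = WriteBehindBuffer()
for _ in range(64):
    buf.write(b"x" * 128)              # 64 small writes, 8KB in total
print(buf.flash_writes)                # 2 flash operations instead of 64
```

Fewer, larger flash operations is precisely where the reported factor-of-2-to-3 energy saving for consecutive small writes comes from.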

5. StepTree: A File System Visualizer. Bladh, Thomas, January 2002.
A 3D visualization tool for file system hierarchies is presented. The visualization technique used is based on the Tree-map / nested Venn diagram concept and is capable of visualizing metrics and attributes such as size, change, and file type for thousands of nodes simultaneously. Size is visualized through node base area, change is visualized through the ghosting and hiding of unchanged nodes, and file type is visualized through colors. Actions such as navigation and selection are performed exclusively in 3D. Finally, a method for improving the visibility of nodes through the equalization of sibling nodes is proposed.
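A condensed sketch of the two allocation ideas, assuming a simple linear blend between size-proportional and uniform shares; the blend is an assumption for illustration, and the thesis's exact equalization method may differ.

```python
# Condensed sketch of treemap-style area allocation with sibling
# equalization (illustrative; the linear blend is an assumption, not
# StepTree's actual algorithm). Each child normally gets area
# proportional to its subtree size; equalization pulls every sibling's
# share toward an equal split so tiny nodes remain visible.

def child_areas(parent_area, child_sizes, equalization=0.0):
    """equalization=0 -> purely size-proportional; 1 -> all equal."""
    total = sum(child_sizes)
    n = len(child_sizes)
    areas = []
    for size in child_sizes:
        proportional = parent_area * size / total
        uniform = parent_area / n
        areas.append((1 - equalization) * proportional
                     + equalization * uniform)
    return areas

sizes = [9000, 500, 300, 200]           # one huge file, three tiny ones
print(child_areas(100.0, sizes))        # tiny nodes nearly invisible
print(child_areas(100.0, sizes, 0.5))   # equalized: tiny nodes visible
```

Because the result is a convex combination of two allocations that each sum to the parent's area, the treemap invariant (children exactly tile the parent) is preserved at every equalization level.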

6. Comparing Remote Data Transfer Rates of Compact Muon Solenoid Jobs with Xrootd and Lustre. Kaganas, Gary H, 01 April 2014.
To explore the feasibility of processing Compact Muon Solenoid (CMS) analysis jobs across the wide area network, the FIU CMS Tier-3 center and the Florida CMS Tier-2 center designed a remote data access strategy. A Kerberized Lustre test bed was installed at the Tier-2, designed to provide storage resources to private-facing worker nodes at the Tier-3. However, the Kerberos security layer cannot authenticate resources behind a private network. As a remedy, an xrootd server was installed on a public-facing node at the Tier-3 to export the file system to the private-facing worker nodes. We report the performance of CMS analysis jobs processed by the Tier-3 worker nodes accessing data from the Kerberized Lustre file system. The processing performance of this configuration is benchmarked against a direct connection to the Lustre file system, and separately, against a configuration in which the xrootd server is near the Lustre file system.

7. PRACTICAL CLOUD COMPUTING INFRASTRUCTURE. Lembke, James A, 12 March 2021.
Cloud and parallel computing are fundamental components in the processing of large data sets. Deployments of distributed computers require network infrastructure that is fast, efficient, and secure. Software Defined Networking (SDN) separates the forwarding of network data by switches (the data plane) from the setting and managing of network policies (the control plane). While this separation provides flexibility for setting network policies that govern the establishment of network flows in the data plane, it provides little to no tolerance for failures, whether benign or caused by corrupted or malicious applications. Such failures can cause network flows to be routed incorrectly through the network or stop such flows altogether. Without protection against faults, cloud network providers using SDN run the risk of inefficient allocation of network resources or even data loss. Furthermore, the asynchronous nature of existing SDN protocols provides no mechanism for consistency of network policy updates across multiple switches.

In addition, cloud and parallel applications require an efficient means of accessing local system data (input data sets, temporary storage locations, etc.). While in many cases it may be possible for a process to access this data by making calls directly to a file system (FS) kernel driver, this is not always possible (e.g., when using experimental distributed FSs whose access libraries exist only in user space).

This dissertation provides a design for fault tolerance in SDN and infrastructure for advancing the performance of user-space FSs. It is divided into three main parts. The first part describes a fault-tolerant, distributed SDN control plane framework. The second part expands upon the fault-tolerant approach to the SDN control plane by providing a practical means for dynamic control plane membership as well as a simple mechanism for controller authentication through threshold signatures. The third part describes an efficient framework for user-space FS access.

This research makes three contributions. First, the design, specification, implementation, and evaluation of a method for a fault-tolerant SDN control plane that is interoperable with existing control plane applications and requires minimal instrumentation of the data plane runtime. Second, the design, specification, implementation, and evaluation of a mechanism for dynamic SDN control plane membership that ensures consistency of network policy updates and minimizes switch overhead through the use of distributed key generation and threshold signatures. Third, the design, specification, implementation, and evaluation of a user-space FS access framework that conforms to the Portable Operating System Interface (POSIX) specification and delivers significantly better performance than existing user-space access methods, while requiring no implementation changes from application programmers.
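As a rough illustration of the k-of-n threshold idea behind the controller-authentication mechanism, the sketch below uses Shamir secret sharing over a prime field, a standard building block of threshold schemes; it is a stand-in for exposition, not the dissertation's actual distributed key generation or signature protocol.

```python
# Illustrative sketch of the k-of-n threshold idea (Shamir secret
# sharing over a prime field; a stand-in, NOT the dissertation's actual
# threshold-signature protocol). Any k controllers can jointly recover
# the secret; fewer than k learn nothing about it.

import random

P = 2**127 - 1                         # a Mersenne prime as field modulus

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which suffice."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789      # any 3 of 5 suffice
assert recover(shares[1:4]) == 123456789
```

In a threshold-signature setting the same k-of-n principle applies to signing rather than secret recovery, so a quorum of controllers can authenticate a policy update while no single compromised controller can.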

8. Exploration and Integration of File Systems in LlamaOS. Craig, Kyle, January 2014.
No description available.

9. Organic File Systems. Πασιόπουλος, Ανδρέας, 04 December 2012.
We propose and advocate a new paradigm for next-generation file systems. In it, the traditional view of a file is replaced by the notion of an information unit, and the traditional notion of hierarchical file systems is replaced by an ever-evolving space of dynamically interrelated information units. An Organic File System (OFS) is defined as a system which develops naturally and which does not conform to artificial rules and predefined, static ways of being viewed by its users. At the core of OFS lie novel abstractions which support a continuously evolving set of information units, users' characterizations of them, and relationships established between them by users accessing them. These abstractions also allow the same system and its contents to be viewed differently by different types of users, based on their current information needs.

OFS is human-centered, as human input is used to characterize information units and to discover and annotate relationships between units. Given this, at the heart of OFS lie algorithms for content-based search of stored files.

We report our R&D efforts so far, including a kernel-level architecture and implementation of the basic features of OFS and relevant performance measurements establishing the viability of our approach. We then discuss the challenges that remain and the impact OFS can have on related R&D efforts, highlighting relevant research from other fields, such as Information Retrieval, Social Software, User Interfaces, and Data Management.

10. Scale and Concurrency of Massive File System Directories. Patil, Swapnil, 01 May 2013.
File systems store data in files and organize these files in directories. Over decades, file systems have evolved to handle increasingly large files: they distribute files across a cluster of machines, they parallelize access to these files, they decouple data access from metadata access, and hence they provide scalable file access for high-performance applications. Sadly, most cluster-wide file systems lack any sophisticated support for large directories. In fact, most cluster file systems continue to use directories that were designed for humans, not for large-scale applications. The former use case typically involves hundreds of files and infrequent concurrent mutations in each directory, while the latter consists of tens of thousands of concurrent threads that simultaneously create large numbers of small files in a single directory at very high speeds. As a result, most cluster file systems exhibit a very poor file-create rate within a directory, either due to limited scalability from using a single centralized directory server or due to reduced concurrency from using a system-wide synchronization mechanism.
This dissertation proposes a directory architecture called GIGA+ that enables a directory in a cluster file system to store millions of files and sustain hundreds of thousands of concurrent file creations every second. GIGA+ makes two contributions: a concurrent indexing technique to scale out a growing directory on many servers, and an efficient layered design to scale up performance. GIGA+ uses a hash-based, incremental partitioning algorithm that enables highly concurrent directory indexing through asynchrony and eventual consistency of the internal indexing state (while providing strong consistency guarantees to the application data). This dissertation analyzes several trade-offs made by the GIGA+ design between data migration overhead, load balancing effectiveness, directory scan performance, and entropy of indexing state, and compares them with policies used in other systems. GIGA+ also demonstrates a modular implementation that separates directory distribution from directory representation. It layers a client-server middleware, which spreads work among many GIGA+ servers, on top of a backend storage system, which manages the on-disk directory representation. This dissertation studies how system behavior depends tightly on both the indexing scheme and the on-disk implementation, and evaluates how the system performs for different backend configurations, including local and shared-disk stores. The GIGA+ prototype delivers highly scalable directory performance (exceeding the most demanding Petascale-era requirements), provides the traditional UNIX file system interface (so applications run without any modifications), and offers new functionality layered on existing cluster file systems (which lack support for distributed directories).
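A compact sketch of hash-based incremental partitioning in the spirit GIGA+ describes, using an extendible-hashing-style split (illustrative; not GIGA+'s actual algorithm or its asynchronous consistency machinery): the directory starts as one partition, and when a partition overflows, only that partition splits.

```python
# Compact sketch of hash-based incremental directory partitioning in the
# spirit GIGA+ describes (illustrative; not the actual GIGA+ code). A
# directory starts as one partition; when a partition overflows, only it
# splits into two deeper partitions, so growth is incremental and the
# partitions can be spread across many servers.

import hashlib

MAX_ENTRIES = 4                        # tiny threshold for demonstration

def h(name, bits):
    """Low `bits` bits of a stable hash of the file name."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:8], "big") & ((1 << bits) - 1)

class Directory:
    def __init__(self):
        # Partition (depth, prefix) holds names with h(name, depth) == prefix.
        self.partitions = {(0, 0): set()}

    def _find(self, name):
        for (depth, prefix) in self.partitions:
            if h(name, depth) == prefix:
                return (depth, prefix)

    def create(self, name):
        key = self._find(name)
        self.partitions[key].add(name)
        if len(self.partitions[key]) > MAX_ENTRIES:
            self._split(key)           # a skewed child splits on a later create

    def _split(self, key):
        depth, prefix = key
        entries = self.partitions.pop(key)
        for p in (prefix, prefix | (1 << depth)):      # two children
            self.partitions[(depth + 1, p)] = set()
        for name in entries:                           # rehash locally
            self.partitions[(depth + 1, h(name, depth + 1))].add(name)

d = Directory()
for i in range(50):
    d.create(f"file-{i}")
print(len(d.partitions), "partitions")  # grew incrementally via splits
```

Because only the overflowing partition is rehashed, splits are cheap and local; GIGA+ additionally lets clients' views of the partition map lag behind the servers' (eventual consistency of the indexing state) instead of synchronizing every split system-wide, which is where its high concurrency comes from.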