About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Linux-Dateisysteme: XFS und JFS

Schreiber, Alexander 23 October 2000 (has links) (PDF)
The talk presents two file systems from the commercial world that are currently being ported to Linux: JFS from IBM and XFS from SGI. It gives an overview of the two file systems, their features, and the state of their ports.
2

Comparison and End-to-End Performance Analysis of Parallel Filesystems

Kluge, Michael 20 September 2011 (has links) (PDF)
This thesis presents a contribution to the field of performance analysis for Input/Output (I/O) related problems, focusing on the area of High Performance Computing (HPC). Besides the compute nodes, HPC systems comprise a large number of supporting components that each add their individual behavior to the overall performance characteristics of the system. File systems in such environments in particular have their own infrastructure: file operations are typically initiated at the compute nodes and proceed through a deep software stack until the file content arrives at the physical medium. The current state of the art for performance analysis in this area has a handful of shortcomings; it lacks system-wide data collection, a comprehensive approach for analyzing all collected data, a trace event analysis adjusted to I/O-related problems, and methods to compare current with archived performance data. This thesis proposes to instrument all software and hardware layers to enhance the performance analysis of file operations. The additional information can be used to investigate performance characteristics of parallel file systems.

To perform I/O analyses on HPC systems, a comprehensive approach is needed to gather related performance events, examine the collected data and, if necessary, replay relevant parts on different systems. A larger part of this thesis is dedicated to algorithms that reduce the amount of information found in trace files to the level needed for an I/O analysis. This reduction is based on the assumption that, for this type of analysis, all I/O events but only a subset of the synchronization events of a parallel program trace have to be considered: to extract an I/O pattern from an event trace, only those synchronization points that describe dependencies among different I/O requests are needed. Two algorithms are developed to remove negligible events from the event trace. A sketch of this kind of reduction is shown below.

In the related work on the analysis of parallel file systems, the inclusion of counter data from external sources, e.g. the infrastructure of a parallel file system, has been identified as a major milestone towards a holistic analysis approach. This infrastructure contains a large amount of valuable information that is essential to describe performance effects observed in applications. This thesis presents an approach to collect, process and store this data, and discusses how to correctly merge the collected values with application traces. Here, a revised definition of the term "performance counter" is the first step, followed by a tree-based approach to combine raw values into secondary values. A visualization approach for I/O patterns closes another gap in the analysis process. Replaying I/O-related performance events or event patterns can be done with a flexible I/O benchmark; the constraints for the development of such a benchmark are identified, as well as the overall architecture of a prototype implementation.

Finally, different examples demonstrate the usage of the developed methods and show their potential. All examples are real use cases situated on the HRSK research complex and the 100GBit Testbed at TU Dresden. The I/O-related parts of a bioinformatics application and a CFD application have been analyzed in depth, and enhancements for both are proposed. An instance of a Lustre file system was deployed and tuned on the 100GBit Testbed through extensive use of external performance counters.
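The trace reduction idea can be pictured with a minimal sketch, assuming a simple in-memory event representation; the names (Event, reduce_trace) and the exact dependency rule are illustrative and not taken from the thesis.

```python
# Illustrative sketch: keep all I/O events of a parallel program trace, but
# keep a synchronization event only if later I/O on one of the ranks it
# connects could depend on it. Compute events are dropped entirely.
from dataclasses import dataclass
from typing import List


@dataclass
class Event:
    rank: int          # process that recorded the event
    time: float        # timestamp within the trace
    kind: str          # "io", "sync", or "compute"
    partner: int = -1  # peer rank for synchronization events


def reduce_trace(events: List[Event]) -> List[Event]:
    events = sorted(events, key=lambda e: e.time)

    # Latest I/O timestamp per rank; a sync point before that time may still
    # order later I/O requests and therefore has to be kept.
    last_io_time = {}
    for e in events:
        if e.kind == "io":
            last_io_time[e.rank] = max(last_io_time.get(e.rank, 0.0), e.time)

    reduced = []
    for e in events:
        if e.kind == "io":
            reduced.append(e)
        elif e.kind == "sync" and (
            last_io_time.get(e.rank, float("-inf")) > e.time
            or last_io_time.get(e.partner, float("-inf")) > e.time
        ):
            reduced.append(e)
    return reduced


trace = [
    Event(0, 1.0, "compute"),
    Event(0, 2.0, "sync", partner=1),
    Event(1, 3.0, "io"),
    Event(0, 4.0, "sync", partner=1),  # no I/O follows on either rank: dropped
]
print(len(reduce_trace(trace)))  # 2
```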
3

Snapshots in large-scale distributed file systems

Stender, Jan 21 January 2013 (has links)
Snapshots are present in many modern file systems, where they make it possible to create consistent online backups, to roll back corruptions or inadvertent changes of files, and to keep a record of changes to files and directories. While most previous work on file system snapshots refers to local file systems, modern trends like cloud and cluster computing have shifted the focus towards distributed storage infrastructures. Such infrastructures often comprise large numbers of storage servers, which presents particular challenges in terms of scalability, availability and failure tolerance. This thesis describes a snapshot algorithm for large-scale distributed file systems and its integration into XtreemFS, a scalable object-based file system for grid and cloud computing environments. The two building blocks of the algorithm are a version management scheme, which efficiently records versions of file content and metadata, and a scalable, failure-tolerant mechanism that aggregates specific versions in a snapshot. To overcome the lack of global time in a distributed system, the algorithm implements a relaxed consistency model for snapshots, which is based on timestamps assigned by loosely synchronized server clocks.
The main contributions of the thesis are: 1) a formal model of snapshots and snapshot consistency in distributed file systems; 2) the description of efficient schemes for the management of metadata and file content versions in object-based file systems; 3) the formal presentation of a scalable, fault-tolerant snapshot algorithm for large-scale object-based file systems; 4) a detailed description of the implementation of the algorithm as part of XtreemFS. An extensive evaluation shows that the proposed algorithm has no severe impact on user I/O, and that it scales to large numbers of snapshots and versions.
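A minimal sketch of the timestamp-based version selection such a relaxed consistency model implies is given below; the data layout and names (Version, snapshot_view) are illustrative assumptions, not the XtreemFS implementation.

```python
# Illustrative sketch: for each file, pick the newest version whose timestamp
# does not exceed the snapshot timestamp. Timestamps come from loosely
# synchronized server clocks, so the resulting view is only consistent up to
# the maximum clock drift between servers.
from typing import Dict, List, Optional, Tuple

Version = Tuple[float, bytes]  # (server timestamp, file content)


def snapshot_view(versions: Dict[str, List[Version]],
                  snapshot_ts: float) -> Dict[str, Optional[bytes]]:
    """Return the content each file had 'at' snapshot_ts."""
    view = {}
    for path, history in versions.items():
        eligible = [v for v in history if v[0] <= snapshot_ts]
        view[path] = max(eligible, key=lambda v: v[0])[1] if eligible else None
    return view


# Two servers whose clocks differ slightly still yield a usable snapshot view.
versions = {
    "/a": [(10.0, b"a1"), (20.5, b"a2")],
    "/b": [(19.8, b"b1"), (30.0, b"b2")],
}
print(snapshot_view(versions, snapshot_ts=21.0))  # {'/a': b'a2', '/b': b'b1'}
```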
4

Reducing Size and Complexity of the Security-Critical Code Base of File Systems

Weinhold, Carsten 09 July 2014 (has links) (PDF)
Desktop and mobile computing devices increasingly store critical data, both personal and professional in nature. Yet, the enormous code bases of their monolithic operating systems (hundreds of thousands to millions of lines of code) are likely to contain exploitable weaknesses that jeopardize the security of this data in the file system. Using a highly componentized system architecture based on a microkernel (or a very small hypervisor) can significantly improve security. The individual operating system components have smaller code bases running in isolated address spaces so as to provide better fault containment. Their isolation also allows for smaller trusted computing bases (TCBs) of applications that comprise only a subset of all components. In my thesis, I built VPFS, a virtual private file system that is designed for such a componentized system architecture. It aims at reducing the amount of code and complexity that a file system implementation adds to the TCB of an application. The basic idea behind VPFS is similar to that of a VPN, which securely reuses an untrusted network: the core component of VPFS implements all functionality and cryptographic algorithms that an application needs to rely upon for confidentiality and integrity of file system contents. This security-critical core reuses a much more complex and therefore untrusted file system stack for non-critical functionality and access to the storage device. Additional trusted components ensure recoverability.
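The VPN-like split can be sketched as follows, assuming a toy in-memory backend; this is not VPFS code (VPFS targets a microkernel system, not Python), and the third-party cryptography package is used here only for brevity.

```python
# Illustrative sketch: a small trusted core encrypts and authenticates file
# contents before handing them to an untrusted storage backend, so the large
# file system stack can be reused without being trusted for confidentiality
# or integrity of the data.
from cryptography.fernet import Fernet, InvalidToken


class UntrustedStore:
    """Stands in for the large, untrusted file system stack."""
    def __init__(self):
        self._blobs = {}

    def write(self, name: str, blob: bytes) -> None:
        self._blobs[name] = blob

    def read(self, name: str) -> bytes:
        return self._blobs[name]


class TrustedCore:
    """The small security-critical component the application must trust."""
    def __init__(self, store: UntrustedStore):
        self._store = store
        self._key = Fernet(Fernet.generate_key())  # key stays inside the TCB

    def write_file(self, name: str, data: bytes) -> None:
        self._store.write(name, self._key.encrypt(data))

    def read_file(self, name: str) -> bytes:
        try:
            return self._key.decrypt(self._store.read(name))
        except InvalidToken:
            raise IOError(f"integrity check failed for {name}")


core = TrustedCore(UntrustedStore())
core.write_file("report.txt", b"confidential data")
print(core.read_file("report.txt"))  # b'confidential data'
```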
