1

BlobSeer: Towards efficient data storage management for large-scale, distributed systems

Nicolae, Bogdan. 30 November 2010.
With data volumes increasing at a high rate and the emergence of highly scalable infrastructures (cloud computing, petascale computing), the distributed management of data becomes a crucial issue that faces many challenges. This thesis brings several contributions to address these challenges. First, it proposes a set of principles for designing highly scalable distributed storage systems that are optimized for heavy data access concurrency; in particular, it highlights the potentially large benefits of using versioning in this context. Second, based on these principles, it introduces a series of distributed data and metadata management algorithms that enable high throughput under concurrency. Third, it shows how to implement these algorithms efficiently in practice, dealing with key issues such as high-performance parallel transfers, efficient maintenance of distributed data structures, and fault tolerance. These results are used to build BlobSeer, an experimental prototype that demonstrates both the theoretical benefits of the approach in synthetic benchmarks and its practical benefits in real-life application scenarios: as a storage backend for MapReduce applications, as a storage backend for the deployment and snapshotting of virtual machine images in clouds, and as a quality-of-service-enabled data storage service for cloud applications. Extensive experiments on the Grid'5000 testbed show that BlobSeer remains scalable and sustains high throughput even under heavy access concurrency, outperforming several state-of-the-art approaches by a large margin.
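The abstract credits versioning for sustaining throughput under heavy access concurrency. The Python sketch below is a toy illustration of that general idea only, not BlobSeer's actual interface: every write publishes a new immutable snapshot, so readers address a fixed version and never block writers. All names (VersionedBlob, chunk_size, etc.) are hypothetical.

```python
import threading

class VersionedBlob:
    """Toy versioned blob: writes create immutable snapshots (not BlobSeer's API)."""

    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.versions = [{}]          # version id -> {chunk index: chunk bytes}
        self.lock = threading.Lock()  # serializes only the publication of a version

    def write(self, offset, data):
        """Write bytes at offset and return the id of the new snapshot version."""
        with self.lock:
            snapshot = dict(self.versions[-1])   # copy the chunk map; unchanged chunks are shared
            for i, b in enumerate(data):
                idx = (offset + i) // self.chunk_size
                chunk = bytearray(snapshot.get(idx, b"\x00" * self.chunk_size))
                chunk[(offset + i) % self.chunk_size] = b
                snapshot[idx] = bytes(chunk)
            self.versions.append(snapshot)
            return len(self.versions) - 1

    def read(self, version, offset, length):
        """Read from an immutable snapshot; no locking needed for readers."""
        snapshot = self.versions[version]
        out = bytearray()
        for i in range(offset, offset + length):
            chunk = snapshot.get(i // self.chunk_size, b"\x00" * self.chunk_size)
            out.append(chunk[i % self.chunk_size])
        return bytes(out)


blob = VersionedBlob()
v1 = blob.write(0, b"hello")
v2 = blob.write(0, b"WORLD")
print(blob.read(v1, 0, 5))   # b'hello' -- the old version remains readable
print(blob.read(v2, 0, 5))   # b'WORLD'
```

In this toy model only the publication of a new version is serialized; readers work on immutable snapshots, which is the property the thesis exploits at distributed scale.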
2

Simulation générique et contribution à l'optimisation de la robustesse des systèmes de données à large échelle / Generic simulation and contribution to the robustness optimization of large-scale data storage systems

Gougeaud, Sebastien. 11 May 2017.
The capacity of data storage systems keeps increasing and currently reaches the exabyte scale, which has a real impact on storage system robustness: the more disks a system contains, the higher the probability of a failure, and the time needed to rebuild a disk is proportional to its capacity. Simulation is an appropriate technique to test new mechanisms under near-real conditions and to predict their behavior. We propose the Open and Generic data Storage system Simulation tool (OGSSim), which handles the heterogeneity and the large size of modern systems. Its modular design treats each storage technology, placement scheme, or computation model as a brick that can be combined with others to configure the simulation as precisely as possible. Robustness being a critical issue for these systems, we use declustered RAID to distribute the reconstruction of a failed disk's data across the system. We propose the Symmetric Difference of Source Sets (SD2S) algorithm, which shifts data blocks to build the placement scheme; the shift offset is derived from the distance between the logical source sets of the blocks of a physical disk. To evaluate the efficiency of SD2S, we compared it to the Crush method without replicas. Placement scheme creation is faster with SD2S in both normal and failure modes, and its memory footprint is also reduced (null in normal mode). In case of a double failure, SD2S ensures the recovery of part, if not all, of the data.
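The abstract only states that SD2S derives a block-shifting offset from a distance between logical source sets; the sketch below is a simplified, hypothetical illustration of that general idea, not the thesis's actual algorithm. Stripes are rotated across physical disks by a candidate offset, and each offset is scored by the symmetric difference between the source sets of pairs of disks; all function names and parameters (source_set, score, choose_offset, stripe_width) are invented for illustration.

```python
from itertools import combinations

def source_set(disk, offset, n_disks, n_stripes, stripe_width):
    """Logical stripes whose blocks land on `disk` when stripe s is rotated
    by s * offset across the physical disks (toy placement rule)."""
    stripes = set()
    for s in range(n_stripes):
        for k in range(stripe_width):
            if (s * offset + k) % n_disks == disk:
                stripes.add(s)
    return stripes

def score(offset, n_disks, n_stripes, stripe_width):
    """Smallest pairwise symmetric difference between disks' source sets:
    larger values mean two disks co-host fewer stripes, a simple proxy for
    spreading reconstruction load after a failure."""
    sets = [source_set(d, offset, n_disks, n_stripes, stripe_width)
            for d in range(n_disks)]
    return min(len(a ^ b) for a, b in combinations(sets, 2))

def choose_offset(n_disks, n_stripes=64, stripe_width=4):
    """Pick the shift offset with the best worst-case spread."""
    return max(range(1, n_disks),
               key=lambda o: score(o, n_disks, n_stripes, stripe_width))

print(choose_offset(7))   # offset in 1..6 that maximizes the chosen distance
```

The real SD2S placement and its proximity measure are defined in the thesis; this sketch only shows how a shift offset can be selected from a set-distance criterion.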
