1 |
Performance Evaluation of Gluster and Compuverde Storage Systems : Comparative analysis
Rajana, Poojitha, January 2016
Context. Big Data and Cloud Computing workloads nowadays require large amounts of storage accessible by many servers. Distributed storage systems emerged to overcome the performance bottlenecks and single points of failure of centralized storage, and our main aim in this thesis is to evaluate the performance of such systems. Erasure coding, a file coding technique, is used for data protection in the storage systems. Objectives. In this study we evaluate the performance of distributed storage systems, examine how different patterns of I/O operations (reads and writes) affect performance, and compare different measurement approaches for storage performance. Methods. A synthetic workload generator that streams and transcodes video data, together with the SPECsfs2014 benchmark tool, is used to evaluate the performance of two file-based distributed storage systems, GlusterFS and Compuverde. Results. In terms of throughput, Gluster and Compuverde perform similarly for both NFS and SMB servers. The average latency results for both NFS and SMB shares indicate that Compuverde has lower latency. With the NFS server, Compuverde delivers 100% of the requested IOPS while Gluster delivers close to the requested OP rate; with the SMB server, Gluster delivers 100% of the requested IOPS while Compuverde delivers more than the requested OP rate.
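Both systems in this comparison rely on erasure coding for data protection. As a rough illustration only (not the actual Gluster or Compuverde configuration, whose coding parameters are not given in this abstract), a single XOR parity fragment, RAID-5 style, already lets a store survive the loss of any one fragment:

```python
def add_parity(fragments):
    """Append one XOR parity fragment to k equal-sized data fragments."""
    parity = bytes(len(fragments[0]))
    for frag in fragments:
        parity = bytes(a ^ b for a, b in zip(parity, frag))
    return fragments + [parity]

def rebuild(coded, lost_index):
    """Recover the fragment at lost_index by XOR-ing all surviving ones."""
    survivors = [f for i, f in enumerate(coded) if i != lost_index]
    out = bytes(len(survivors[0]))
    for frag in survivors:
        out = bytes(a ^ b for a, b in zip(out, frag))
    return out
```

Production systems use k+m codes (e.g., Reed-Solomon) that tolerate m simultaneous losses; the XOR sketch above is the m = 1 special case.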
|
2 |
Evaluating Energy Consumption of Distributed Storage Systems : Comparative analysis
Kolli, Samuel Sushanth, January 2016
Context : Big Data and Cloud Computing nowadays require large amounts of storage accessible by many servers, and the energy consumed by these servers, as well as by the hosts providing the storage, has grown rapidly in recent years. There are various approaches to saving energy at both the hardware and the software level. In the context of software, this challenge requires identifying new development methodologies that can help reduce the energy footprint of distributed storage systems; until recently this has remained difficult because no such methodology had been implemented. To tackle this challenge, we evaluate the energy consumption of distributed storage systems using the Power Application Programming Interface (PowerAPI), which monitors, in real time, the energy consumed at the granularity of a system process. Objectives : In this study we investigate the energy consumption of distributed storage systems, examine how different patterns of video streams affect energy consumption, and compare different measurement approaches for energy performance. Methods : A power-measuring software library monitors consumption while a synthetic load generator produces the load, i.e., video data streams. The workload is generated by the Standard Performance Evaluation Corporation Solution File Server benchmark (SPECsfs 2014), and PowerAPI is the software power-monitoring library used to evaluate the energy consumption of the GlusterFS and Compuverde distributed storage systems. Results : The mean and median values of the power samples, in milliwatts, are higher for Compuverde than for Gluster. For Compuverde, the mean and median stayed around 400 milliwatts until the load was incremented to three streams, whereas the mean and median for Gluster increased gradually.
Conclusions : The results show Compuverde consuming more energy than Gluster, as it runs a larger number of processes implementing additional features that do not exist in Gluster. We also conclude that Compuverde performed better at higher loads, i.e., more video data streams. / Topic: Evaluating Energy Consumption of Distributed Storage Systems. Advisor: Dr. Dragos Ilie, Senior Lecturer, BTH. External Advisor: Stefan Bernbo, CEO, Compuverde AB. Student: Samuel Sushanth Kolli. The report gives a clear description of distributed storage systems and their energy consumption, with a performance evaluation, and includes a complete description of the SPECsfs 2014 and PowerAPI tools. / Performance Evaluation of Distributed Storage Systems
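PowerAPI reports per-process power samples, and the mean and median figures quoted above are summary statistics over such a trace. A minimal sketch of that reduction (the sampling interval is an assumed parameter, not a value stated in the abstract):

```python
import statistics

def summarize_power(samples_mw, interval_s):
    """Reduce a trace of power samples (milliwatts, one every interval_s
    seconds) to summary statistics plus total energy in joules."""
    return {
        "mean_mw": statistics.mean(samples_mw),
        "median_mw": statistics.median(samples_mw),
        # energy = power integrated over time; 1 mW = 1e-3 W
        "energy_j": sum(samples_mw) * interval_s / 1000.0,
    }
```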
|
3 |
RTP Compatible: Two Models of Video Streaming Over VANETs
Fang, Zhifei, January 2014
Because Vehicular Ad Hoc Networks (VANETs) often have a high packet loss rate, the protocol formerly used for video streaming, the Real-time Transport Protocol (RTP), is no longer suitable for this environment. Previous research has proposed many new protocols to solve this problem; however, most of them cannot make full use of existing Internet video streaming resources such as RTP servers.
Our work proposes two models to solve this compatibility issue. The first model is called the converter model. Based on this model, we first modify RTP using an erasure coding (EC) technique to adapt it to the high packet loss rate of VANETs; the resulting protocol is called EC-RTP. We then developed two converters. The first converter sits on the boundary between the Internet and the VANET: it receives the RTP packets sent from the Internet, translates them into EC-RTP packets, and these packets are transported over the VANET. The second converter receives the EC-RTP packets, translates them back into RTP packets, and sends them to an RTP player so that the player can play them. To enable EC-RTP to carry video streams other than RTP, we propose a second model, called the redundancy tunnel. Based on this model, the protocol between the two converters carries RTP as its payload, using the same technique we used to modify RTP. Finally, we ran experiments with Android tablets. The results show that our solution can use the same player to play the same video resources as RTP does while, unlike RTP, reducing the packet loss rate.
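The gain from EC-RTP's redundancy can be estimated analytically. Assuming, hypothetically (the abstract does not state the actual code rate), one XOR repair packet per group of k data packets and independent losses with probability p, a group is decodable whenever at most one of its k + 1 packets is lost:

```python
def residual_group_loss(p, k):
    """Probability that a group of k data packets plus one XOR repair packet
    cannot be fully recovered, under independent loss with probability p."""
    n = k + 1
    # decodable iff 0 or 1 of the n packets are lost
    p_decodable = (1 - p) ** n + n * p * (1 - p) ** (n - 1)
    return 1 - p_decodable
```

At a 10% loss rate with k = 4, about 8.1% of groups remain unrecoverable, versus the roughly 41% of unprotected 5-packet groups that lose at least one packet, at the cost of 25% extra bandwidth.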
|
4 |
Enhancing the Performance of Relay Networks with Network Coding
Melvin, Scott Harold, 02 August 2012
This dissertation examines the design and application of network coding (NC) strategies to enhance the performance of communication networks. With its ability to combine information packets from different, previously independent data flows, NC has the potential to improve throughput, reduce delay, and increase the power efficiency of communication systems in ways that have not yet been fully exploited, given the current lack of processing power at relay nodes. With these motivations in mind, this dissertation presents three main contributions that employ NC to improve the efficiency of practical communication systems.
First, the integration of NC and erasure coding (EC) is presented in the context of wired networks. While the throughput gains from utilizing NC have been demonstrated, and EC has been shown to be an efficient means of reducing packet loss, these techniques have generally been studied independently. This dissertation presents innovative methods to combine them through cross-layer design methodologies.
Second, three methods to reduce or limit the delay introduced by NC when deployed in networks with asynchronous traffic are developed. In addition, a novel opportunistic approach of applying EC for improved data reliability is designed to take advantage of unused transmission opportunities created by the proposed delay reduction methods.
Finally, computationally efficient methods are developed for selecting relay nodes and assigning transmit power values to minimize the total transmit power consumed in cooperative relay networks with NC. Adaptive power allocation is utilized to control the formation of the network topology so as to maximize the efficiency of the NC algorithm.
This dissertation advances the efficient deployment of NC through its integration with other algorithms and techniques in cooperative communication systems, within the framework of cross-layer protocol design. The motivation is that, to improve the performance of communication systems, relay nodes will need to perform more intelligent processing of data units than traditional routing. The results presented in this work are applicable to both wireless and wired networks carrying real-time traffic, in systems ranging from cellular and ad hoc networks to fixed optical networks.
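The core operation NC adds at a relay is packet combination. In the textbook two-way relay exchange (an illustration of the general idea, not this dissertation's specific scheme), the relay XORs the packets from two endpoints into one broadcast, and each endpoint decodes using its own packet, halving the relay's transmissions:

```python
def xor_combine(pkt_a, pkt_b):
    """Relay: combine two equal-length packets into one coded broadcast."""
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

def xor_decode(coded, own_pkt):
    """Endpoint: recover the other party's packet from the coded broadcast."""
    return bytes(x ^ y for x, y in zip(coded, own_pkt))
```

One coded transmission replaces two plain forwards, which is where the throughput and power gains mentioned above come from.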
|
5 |
On Message Fragmentation, Coding and Social Networking in Intermittently Connected Networks
Altamimi, Ahmed B., 23 October 2014
An intermittently connected network (ICN) is defined as a mobile network that uses cooperation between nodes to facilitate communication. This cooperation consists of nodes carrying messages from other nodes to help deliver them to their destinations. An ICN does not require an infrastructure, and routing information is not retained by the nodes. While this makes ICNs a useful environment for message dissemination, it creates routing challenges: providing satisfactory delivery performance while keeping the overhead low is difficult with no network infrastructure or routing information. This dissertation explores solutions that lead to a high delivery probability while maintaining a low overhead ratio. The efficiency of message fragmentation in ICNs is first examined. Next, routing performance is investigated when erasure coding and network coding are employed in ICNs. Finally, the use of social networking in ICNs to achieve high routing performance is considered.
The aim of this work is to improve the delivery probability while maintaining a low overhead ratio. Message fragmentation is shown to improve the CDF of the message delivery probability compared to existing methods. The use of erasure coding in an ICN further improves this CDF. Finally, the use of network coding was examined, and its advantage over message replication is quantified in terms of the message delivery probability. Results are presented which show that network coding can improve the delivery probability compared to using message replication alone.
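The coding-versus-replication comparison in the last paragraph can be made concrete. Assuming each copy or coded block independently reaches the destination with probability q (a simplifying assumption; real ICN contact processes are more complex), the two delivery probabilities are:

```python
from math import comb

def delivery_replication(q, n):
    """n full message copies: delivery succeeds if at least one arrives."""
    return 1 - (1 - q) ** n

def delivery_erasure(q, n, k):
    """n coded blocks built from k fragments: success if at least k arrive."""
    return sum(comb(n, i) * q**i * (1 - q) ** (n - i) for i in range(k, n + 1))
```

For the same transmitted volume, e.g. 4 full copies versus 8 half-size blocks with k = 2 at q = 0.5, replication delivers with probability 0.9375 while the (8, 2) code reaches about 0.965.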
|
6 |
Designing High-Performance Erasure Coding Schemes for Next-Generation Storage Systems
Haiyang, Shi, January 2020
No description available.
|
7 |
Protocoles de transport pour la diffusion vidéo temps-réel sur lien sans-fil / Enhanced Transport Protocols for Real Time and Streaming Applications on Wireless Links
Sarwar, Golam, 09 July 2014
Delay-sensitive multimedia communications increasingly dominate the Internet, mainly supporting interactive services and the streaming of multimedia content. Owing to the proliferation of mobile devices, wireless links carry a significant share of this real-time traffic. A number of issues must be addressed in order to obtain acceptable quality of service for these communications in wireless environments. Moreover, the availability of multiple wireless interfaces on mobile devices offers an opportunity to improve communication while further exacerbating the problems already present on single wireless links. To address the error and delay characteristics of these links, this thesis proposes two improvements. First, an enhancement of the Datagram Congestion Control Protocol (DCCP/CCID4) multimedia transport protocol for long-delay links (e.g., satellite links) that significantly improves the performance of Voice over IP (VoIP). Regarding link errors and multipath, this thesis proposes and evaluates an erasure code adapted to Concurrent Multipath Transfer over the Stream Control Transmission Protocol (CMT-SCTP). Finally, an online streaming video quality evaluation tool, implementing a cross-layer method for real-time video quality assessment with the H.264 encoder, is offered to the networking community as open source. / Real-time communications have, in the last decade, become a highly relevant component of Internet applications and services, with both interactive (voice and video) communications and streamed content being used by people in developed and developing countries alike. Due to the proliferation of mobile devices, wireless media has become the means of transmitting a large part of this increasingly important real-time communications traffic.
Wireless has also become an important technology in developing countries (and in remote areas of developed countries), with satellite communications being increasingly deployed for traffic backhaul and ubiquitous connection to the Internet. A number of issues need to be addressed in order to achieve acceptable service quality for real-time communications in wireless environments. In addition, the availability of multiple wireless interfaces on mobile devices presents an opportunity for improvement, but further exacerbates the issues already present on single wireless links. Link errors originating from the wireless media, packet reordering, jitter, and the link and buffering delays required to deal with reordering and jitter all lower the user's quality of experience and perception of network quality and usability. In this thesis, we therefore consider improvements to transport protocols for real-time communications and streaming services that address these problems.
|
8 |
Fault Tolerance in Linear Algebraic Methods using Erasure Coded Computations
Xuejiao Kang (5929862), 16 January 2019
As parallel and distributed systems scale to hundreds of thousands of cores and beyond, fault tolerance becomes increasingly important, particularly on systems with limited I/O capacity and bandwidth. Error correcting codes (ECCs) are used in communication systems, where errors arise when bits are corrupted silently in a message; such codes can detect and correct erroneous bits. Erasure codes, an instance of error correcting codes that deal with data erasures, are widely used in storage systems. An erasure code adds redundancy to the data to tolerate erasures.

In this thesis, erasure coded computations are proposed as a novel approach to dealing with processor faults in parallel and distributed systems. We first give a brief review of traditional fault tolerance methods, error correcting codes, and erasure coded storage. The benefits and challenges of erasure coded computations with respect to coding schemes, fault models, and system support are also presented.

In the first part of my thesis, I demonstrate the novel concept of erasure coded computations for linear system solvers. Erasure coding augments a given problem instance with redundant data. This augmented problem is executed in a fault-oblivious manner in a faulty parallel environment. In the event of faults, we show how we can compute the true solution from potentially fault-prone solutions using a computationally inexpensive procedure. The results on diverse linear systems show that our technique has several important advantages: (i) as the hardware platform scales in size and in number of faults, our scheme yields increasing improvement in resource utilization compared to traditional schemes; (ii) the proposed scheme is easy to code, as the core algorithm remains the same; (iii) the general scheme is flexible enough to accommodate a range of computation and communication trade-offs.

We propose a new coding scheme for augmenting the input matrix that satisfies the recovery equations of erasure coding with high probability in the event of random failures. This coding scheme also minimizes fill (non-zero elements introduced by the coding block), while being amenable to efficient partitioning across processing nodes. Our experimental results show that the scheme adds minimal overhead for fault tolerance, yields excellent parallel efficiency and scalability, and is robust to different fault arrival models and fault rates.

Building on these results, we show how we can minimize, to optimality, the overhead associated with our problem augmentation techniques for linear system solvers. Specifically, we present a technique that adaptively augments the problem only when faults are detected. At any point during execution, we only solve a system of the same size as the original input system. This has several advantages in terms of maintaining the size and conditioning of the system, as well as adding only the minimal amount of computation needed to tolerate the observed faults. We present, in detail, the augmentation process, the parallel formulation, and the performance of our method. Specifically, we show that the proposed adaptive fault tolerance mechanism has minimal overhead in terms of FLOP counts with respect to the original solver executing in a non-faulty environment, has good convergence properties, and yields excellent parallel performance.

Based on the promising results for linear system solvers, we apply the concept of erasure coded computation to eigenvalue problems, which arise in many applications including machine learning and scientific simulations. Erasure coded computation is used to design a fault tolerant eigenvalue solver. The original eigenvalue problem is reformulated into a generalized eigenvalue problem defined on appropriate augmented matrices. We present the augmentation scheme, the necessary conditions for the augmentation blocks, and proofs of equivalence between the original eigenvalue problem and the reformulated generalized eigenvalue problem. Finally, we show how the eigenvalues can be derived from the augmented system in the event of faults.

We present detailed experiments, which demonstrate the excellent convergence properties of our fault tolerant TraceMin eigensolver in the average case. In the worst case, where the row-column pairs that have the most impact on the eigenvalues are erased, we present a novel scheme that computes the augmentation blocks as the computation proceeds, using estimates of the leverage scores of row-column pairs as they are computed by the iterative process. We demonstrate low overhead, excellent scalability in terms of the number of faults, and robustness to different fault arrival models and fault rates for our method.

In summary, this thesis presents a novel approach to fault tolerance based on erasure coded computations, demonstrates it in the context of important linear algebra kernels, and validates its performance on a diverse set of problems on scalable parallel computing platforms. As parallel systems scale to hundreds of thousands of processing cores and beyond, these techniques present the most scalable fault tolerant mechanisms currently available.
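The flavor of erasure coded computation can be conveyed with a toy example of our own (the thesis's actual coding scheme is considerably more general): in a distributed matrix-vector product y = Ax, an extra worker evaluates the parity row (the elementwise sum of the rows of A) against x, and any single lost partial result is then recovered arithmetically rather than by re-execution:

```python
def dot(row, x):
    return sum(a * b for a, b in zip(row, x))

def coded_matvec(rows, x):
    """Each of len(rows) workers computes one entry of y = A @ x; one extra
    worker computes the parity entry from the elementwise sum of the rows."""
    parity_row = [sum(col) for col in zip(*rows)]
    partials = [dot(r, x) for r in rows]
    return partials, dot(parity_row, x)

def recover_lost(partials, parity, lost):
    """Reconstruct the erased partial result without recomputation."""
    return parity - sum(v for i, v in enumerate(partials) if i != lost)
```

The computation proceeds fault-obliviously, and the cheap subtraction at the end replaces a full restart of the failed worker, which is the essence of the approach.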
|
9 |
Sauvegarde des données dans les réseaux P2P / Data Backup in P2P Networks
Tout, Rabih, 25 June 2010
Nowadays, data backup is an essential solution to avoid losing data. Several backup methods and strategies exist, using different types of media. The most effective backup methods often require service subscription fees tied to the hardware and administration costs borne by the providers. The great success of P2P networks and file-sharing applications has made these networks usable by a large number of applications, especially given the possibility of sharing users' resources among themselves. The limitations of classical backup solutions, which do not scale, have made P2P networks attractive for backup applications. Instability in P2P networks, due to the high churn rate of peers, makes communication between peers very difficult. In the context of backup, communication between nodes is indispensable, which demands a high degree of organization in the network. On the other hand, the persistence of the backed-up data in the network remains a major challenge, because a backup is worthless if the saved data are lost and restoration becomes impossible. The objective of this thesis is to improve the organization of backups in P2P networks and to guarantee the persistence of the backed-up data. We therefore designed a planning approach that allows nodes to organize themselves in order to communicate better with each other. In addition, to guarantee persistence, we propose a probabilistic computation approach that determines, according to the variations in the system, the number of replicas needed so that at least one copy persists in the system after a defined time. Our approaches were implemented in a P2P backup application. / Nowadays, data backup is an essential solution to avoid losing data.
Several backup methods and strategies exist, using different media types. The most efficient backup methods are not free, due to the cost of hardware and administration invested by the suppliers. The great success of P2P networks and file-sharing applications has encouraged the use of these networks in multiple applications, especially given the possibility of sharing resources between network users. The limitations of traditional backup solutions in large-scale networks have made P2P networks an interesting option for backup applications. Instability in P2P networks, due to the peers' high movement rate, makes communication between these peers very difficult. To achieve data backup, communication between peers is essential and requires network organization. On the other hand, the persistence of backed-up data in the network remains a major challenge: data backup is useless if the backed-up copies are lost. The objective of this thesis is to improve the backup organization and ensure the persistence of backed-up data in P2P networks. We have therefore developed a planning approach that allows nodes to organize themselves in order to communicate better with each other. To ensure data persistence, we propose a probabilistic approach to compute the minimum number of replicas needed for a given piece of data so that at least one copy remains in the system after a given time. Both approaches have been implemented in a P2P backup application.
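The probabilistic replica calculation described above can be sketched directly. Assuming each replica independently survives the time window with probability p (the thesis derives this probability from observed variations in the system; here it is simply an input), the smallest replica count meeting a persistence target is:

```python
import math

def replicas_needed(p_survive, target):
    """Smallest n such that P(at least one of n replicas survives) >= target,
    i.e. 1 - (1 - p_survive)**n >= target, with replicas failing independently."""
    if not 0.0 < p_survive < 1.0 or not 0.0 < target < 1.0:
        raise ValueError("probabilities must lie strictly between 0 and 1")
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_survive))
```

If each replica has a 50% chance of surviving the window, reaching 99% persistence takes 7 replicas, since 1 - 0.5**7 is about 0.992.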
|
10 |
A QUANTITATIVE FRAMEWORK FOR CDN-BASED OVER-THE-TOP VIDEO STREAMING SYSTEMS
Abubakr O Alabbasi (8187867), 06 January 2020
The demand for global video has been burgeoning across industries. With the expansion and improvement of video-streaming services, cloud-based video is evolving into a necessary feature of any successful business for reaching internal and external audiences. Over-the-top (OTT) video streaming, e.g., Netflix and YouTube, has been dominating global IP traffic in recent years, and more than 50% of OTT video traffic is now delivered through content distribution networks (CDNs). Even though multiple solutions have been proposed for relieving congestion in CDN systems, managing the ever-increasing traffic requires a fundamental understanding of the system and of its design flexibilities (control knobs) so as to make the best use of the hardware within its limitations. In addition, there is no analytical understanding of the key quality of experience (QoE) attributes (stall duration, average quality, etc.) for video streaming transmitted over a CDN-based multi-tier infrastructure, which is the focus of this thesis. The key contribution of this thesis is a white-box analytical understanding of the key QoE attributes of the end user in cloud storage systems, which can be used to systematically address choppy user experience and to optimize system designs. The first key design choice is the scheduling strategy, which chooses the subset of CDN servers from which to obtain the content. The second is the quality of each video chunk. The third is deciding which contents to cache at the edge routers and which to store at the CDN. Towards solving these challenges, this dissertation is divided into three parts. Part 1 considers video streaming over distributed systems where the video segments are encoded using an erasure code for better reliability. Part 2 looks at the problem of optimizing the trade-off between quality and stalls in the streamed videos.
In Part 3, we consider caching partial contents of the videos at the CDN as well as at the edge routers to further optimize video streaming services. We present a model describing today's representative multi-tier system architecture for video streaming applications, typically composed of a centralized origin server, several CDN sites, and edge caches. Our model comprehensively considers the following factors: limited caching space at the CDN sites and edge routers, allocation of a CDN for a video request, the choice of different ports at the CDN, and the central storage and bandwidth allocation. With this model, we optimize different quality of experience (QoE) measures and present novel yet efficient algorithms to solve the formulated optimization problems. Our extensive simulation results demonstrate that the proposed algorithms significantly outperform state-of-the-art strategies. We take one step further and implement a small-scale video streaming system in a real cloud environment, managed by OpenStack, and validate our results.
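Part 1's erasure-coded streaming setup suggests a simple stall model (a rough sketch of the general idea; the thesis's actual queueing analysis is far more detailed): with an (n, k) code, a chunk decodes when the k-th fastest of n parallel server downloads finishes, and any chunk missing its playback deadline adds rebuffering time and shifts every later deadline back:

```python
def chunk_ready_time(finish_times, k):
    """With an (n, k) erasure code, the chunk decodes once the k-th fastest
    of the n parallel downloads completes."""
    return sorted(finish_times)[k - 1]

def stall_duration(ready_times, startup_delay, chunk_sec):
    """Total rebuffering: chunk i plays at its deadline or when ready,
    whichever is later; a late chunk pushes all later deadlines back."""
    stall, deadline = 0.0, startup_delay
    for t in ready_times:
        if t > deadline:
            stall += t - deadline
            deadline = t
        deadline += chunk_sec
    return stall
```

This makes explicit why the scheduling knob matters: the choice of servers determines the distribution of each chunk's ready time, and hence the stall duration.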
|