1
Power-benefit analysis of erasure encoding with redundant routing in sensor networks. Vishwanathan, Roopa, 12 1900
One of the problems sensor networks face is adversaries corrupting nodes along the path to the base station. One way to reduce the effect of these attacks is multipath routing, which introduces some intrusion tolerance into the network by way of redundancy, but at the cost of higher power consumption by the sensor nodes. Erasure coding can be applied to this scenario: the base station can receive a subset of the total data sent and reconstruct the entire message packet at its end. This thesis takes two commonly used encodings and compares their power consumption against that of unencoded data in multipath routing. It is found that combining encoding with multipath routing reduces power consumption while still allowing the user to send reasonably large data sizes. The experiments were performed on the TinyOS platform, with simulations run in TOSSIM and power measurements taken in PowerTOSSIM, using both the simple radio model and the lossy radio model provided by TinyOS. The lossy radio model was simulated with distances of 10, 15, and 20 feet between nodes. It was found that with erasure encoding, double or triple the data size can be sent at the same power consumption rate as unencoded data. All experiments were performed with the radio first at normal transmit power and then at high transmit power.
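To make the reconstruction-from-a-subset idea concrete, here is a minimal sketch of a toy (3, 2) erasure code in Python; it illustrates the principle only and is not one of the two encodings benchmarked in the thesis. Two equal-length data blocks and one XOR parity block travel over three paths, and any two of the three fragments recover the message.

```python
def encode(block_a: bytes, block_b: bytes) -> list[bytes]:
    """Produce 3 fragments from 2 equal-length data blocks; any 2 recover both."""
    parity = bytes(x ^ y for x, y in zip(block_a, block_b))
    return [block_a, block_b, parity]

def decode(fragments: dict[int, bytes]) -> tuple[bytes, bytes]:
    """fragments maps index (0=A, 1=B, 2=parity) -> payload; any 2 of 3 suffice."""
    if 0 in fragments and 1 in fragments:
        return fragments[0], fragments[1]
    if 0 in fragments:  # recover B = A xor parity
        return fragments[0], bytes(x ^ y for x, y in zip(fragments[0], fragments[2]))
    # recover A = B xor parity
    return bytes(x ^ y for x, y in zip(fragments[1], fragments[2])), fragments[1]

frags = encode(b"temp=21C", b"hum=40%!")
assert decode({0: frags[0], 2: frags[2]}) == (b"temp=21C", b"hum=40%!")
```

Sending three fragments instead of two copies of the full message is what trades a modest redundancy overhead against per-path power, the tradeoff the thesis measures.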
2
Distributed Data Storage System for Data Survivability in Wireless Sensor Networks. Al-Awami, Louai, 03 October 2013
Wireless Sensor Networks (WSNs), built from tiny wireless devices capable of communicating, processing, and sensing, promise to have applications in virtually all fields. Smart homes and smart cities are just a few examples of what WSNs can enable.
Despite their potential, WSNs suffer from reliability and energy limitations.
In this study, we address the problem of designing Distributed Data Storage Systems
(DDSSs) for WSNs using decentralized erasure codes. A unique aspect of WSNs
is that their data is inherently decentralized. This calls for a decentralized mechanism
for encoding and decoding. We propose a distributed data storage framework
to increase data survivability in WSNs. The framework utilizes Decentralized Erasure Codes for Data Survivability (DEC-DS), which determine the amount of redundancy required, in both hardware and data, for sensed data to survive failures in the network.
To address the energy limitations, we show two approaches to implementing the proposed solution in an energy-efficient manner. Both employ Random Linear Network Coding (RLNC) to exploit coding opportunities in order to save energy and, in turn, prolong network lifetime. A routing-based scheme, called DEC Encode-and-Forward (DEC-EaF), applies to networks with routing capability, while the second, DEC Encode-and-Disseminate (DEC-EaD), uses a variation of random walk to build the target code in a decentralized fashion. We also introduce a new decentralized approach to implementing Luby Transform (LT) code-based DDSSs. The scheme, called Decentralized Robust Soliton Storage (DRSS), operates in a decentralized fashion and requires no coordination between sensor nodes.
The schemes are tested through extensive simulations to evaluate their performance. We also compare the proposed schemes to similar schemes in the literature, considering energy efficiency as well as coding-related aspects. The proposed schemes can greatly improve the reliability of WSNs, especially under harsh working conditions. / Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2013-09-30 22:43:04.509
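Since the energy savings of DEC-EaF and DEC-EaD come from Random Linear Network Coding, a minimal RLNC sketch over GF(2) may help fix ideas; this is a generic illustration of the technique, not the DEC-DS construction from the thesis. Nodes forward random XOR combinations of the packets they hold, and a sink decodes by Gaussian elimination once the combinations it has collected reach full rank.

```python
import random

def rlnc_encode(packets: list[bytes], rng: random.Random) -> tuple[list[int], bytes]:
    """One random GF(2) combination of the source packets:
    returns (coefficient vector, xor of the selected packets)."""
    k, size = len(packets), len(packets[0])
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[rng.randrange(k)] = 1  # skip the useless all-zero combination
    payload = bytearray(size)
    for c, pkt in zip(coeffs, packets):
        if c:
            payload = bytearray(a ^ b for a, b in zip(payload, pkt))
    return coeffs, bytes(payload)

def rlnc_decode(k: int, coded: list[tuple[list[int], bytes]]) -> list[bytes]:
    """Gauss-Jordan elimination over GF(2); needs k independent combinations."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    rank = 0
    for col in range(k):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            raise ValueError("not enough independent combinations yet")
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[rank][0])],
                           bytearray(a ^ b for a, b in zip(rows[i][1], rows[rank][1])))
        rank += 1
    return [bytes(rows[i][1]) for i in range(k)]

rng = random.Random(7)
source = [b"sensor-A", b"sensor-B", b"sensor-C"]
collected: list[tuple[list[int], bytes]] = []
while True:  # a sink keeps collecting coded packets until it can decode
    collected.append(rlnc_encode(source, rng))
    try:
        assert rlnc_decode(3, collected) == source
        break
    except ValueError:
        pass
print(f"decoded after {len(collected)} coded packets")
```

The decentralized appeal is visible here: no node needs to know which combinations the others emitted, only the sink needs enough of them.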
3
Exploration of Erasure-Coded Storage Systems for High Performance, Reliability, and Inter-operability. Subedi, Pradeep, 01 January 2016
The unprecedented growth of data and the use of low-cost commodity drives, both in local disk-based storage systems and in remote cloud-based servers, have increased the risk of data loss and the overall user-perceived system latency. To guarantee high reliability, replication has been the most popular choice for decades because of its simplicity in data management. But with the high volume of data being generated every day, the storage cost of replication is very high, and it is no longer a viable approach.
Erasure coding is another approach to adding redundancy in storage systems, one that provides high reliability at a fraction of the cost of replication. However, the choice of erasure code affects the storage efficiency, reliability, and overall system performance. At the same time, performance and interoperability are adversely affected by slower device components and by complex central management systems and operations.
To address the problems encountered in the various layers of an erasure-coded storage system, this dissertation explores the different aspects of storage and designs several techniques to improve reliability, performance, and interoperability. These techniques range from a comprehensive evaluation of erasure codes and the application of erasure codes to highly reliable, high-performance SSD systems, to the design of new erasure coding and caching schemes for the Hadoop Distributed File System, one of the central management systems for distributed storage. Detailed evaluations and results are also provided.
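A back-of-the-envelope comparison makes the replication-versus-erasure-coding cost argument concrete; the (10, 4) Reed-Solomon parameters below are illustrative, not taken from the dissertation.

```python
def replication_overhead(copies: int) -> tuple[float, int]:
    """(raw bytes stored per user byte, device failures tolerated)."""
    return float(copies), copies - 1

def erasure_overhead(k: int, m: int) -> tuple[float, int]:
    """An MDS (k data + m parity) code, e.g. Reed-Solomon, tolerates any m losses."""
    return (k + m) / k, m

print(replication_overhead(3))   # (3.0, 2): 200% extra storage, 2 failures tolerated
print(erasure_overhead(10, 4))   # (1.4, 4): 40% extra storage, 4 failures tolerated
```

The erasure-coded layout both stores less and tolerates more simultaneous failures, at the cost of the encoding/decoding work the dissertation then sets out to optimize.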
4
Optimisation de codes correcteurs d’effacements par application de transformées polynomiales / Optimisation of erasure codes by applying polynomial transforms. Detchart, Jonathan, 05 December 2018
Erasure codes are a well-known solution used to make communication protocols and distributed data storage reliable. Most of these codes are based on finite field arithmetic, which defines addition and multiplication over a finite set of elements and often requires complex operations. Because of ever-increasing performance demands, these codes have been the subject of extensive research aimed at faster encoding and decoding speeds while retaining the best possible correction capability. We propose a method that transforms the elements of certain finite fields into elements of a ring and performs all operations there, simplifying both the encoding and the decoding of erasure codes without any compromise on correction capability. We also present a technique for reordering operations that further reduces the number of operations needed for encoding, thanks to properties specific to the rings used. Finally, we analyse the performance of this method on several hardware architectures and detail a simple implementation, based solely on xor instructions, that scales over a massively parallel execution environment much more efficiently than other implementations.
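The xor-only flavour of the approach can be illustrated with multiplication of binary polynomials, i.e. multiplication in the ring GF(2)[x], carried out with nothing but shifts and xors; this sketch does not reproduce the thesis's specific field-to-ring transform.

```python
def gf2x_mul(a: int, b: int) -> int:
    """Multiply two binary polynomials (bits = coefficients) using only shift/xor.
    Example: (x + 1)(x^2 + 1) = x^3 + x^2 + x + 1, i.e. 0b11 * 0b101 = 0b1111.
    In a quotient ring such as GF(2)[x]/(x^n + 1), the final reduction step is
    also xor-only: high-order bits fold back onto the low-order ones."""
    result = 0
    while b:
        if b & 1:
            result ^= a  # addition of polynomials over GF(2) is xor
        a <<= 1          # multiply a by x
        b >>= 1
    return result

assert gf2x_mul(0b11, 0b101) == 0b1111
```

Because every step is a shift or an xor, such kernels map directly onto wide SIMD units, which is what makes the ring representation attractive for massively parallel execution.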
5
Line networks with erasure codes and network coding. Song, Yang, 23 August 2012
Wireless sensor networks play a significant role in the design of the future Smart Grid, mainly for environment monitoring, data acquisition, and remote control. Sensors deployed on utility poles along the power transmission line collect environmental information and send it to the substations for analysis and management. The transmission, however, suffers from erasures and errors along the transmission channels. In this thesis, we consider a line network model proposed in [1] and [2]. We first analyze several different erasure codes in terms of overhead and encoding/decoding costs, and then propose two different coding schemes for our line network. To deal with both erasures and errors, we combine erasure codes with traditional error control codes, using an RS code as an outer code in addition to the erasure codes. Furthermore, an adaptive RS coding scheme is proposed to improve the overall coding efficiency across all SNR regions. Finally, we apply network coding with correction of network errors and erasures and examine our model from a mathematical perspective.
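One way to picture an adaptive RS scheme is sketched below; the adaptation rule is a plausible illustration, not the thesis's algorithm. The sender estimates the per-packet erasure probability for the current SNR and picks the smallest redundancy that keeps the probability of decoding failure below a target.

```python
from math import comb

def erasure_failure_prob(n: int, k: int, p: float) -> float:
    """P(fewer than k of n packets arrive) when each packet is erased i.i.d. w.p. p."""
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k))

def pick_redundancy(k: int, p: float, target: float = 1e-4, n_max: int = 255) -> int:
    """Smallest n >= k such that an MDS (n, k) code fails with probability < target."""
    for n in range(k, n_max + 1):
        if erasure_failure_prob(n, k, p) < target:
            return n
    raise ValueError("channel too lossy for n_max")

# Lower SNR -> higher erasure probability -> more redundancy.
for p in (0.01, 0.05, 0.20):
    print(p, pick_redundancy(k=32, p=p))
```

Adapting n to the channel in this way avoids paying worst-case redundancy in the high-SNR regions, which is the coding-efficiency gain the abstract refers to.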
6
Contributions à la diffusion vidéo : de la fiabilité à l'équité en qualité / Contributions to video delivery: from reliability to quality fairness. Tran Thai, Tuan, 05 June 2014
This thesis presents several contributions to improving video delivery. In the first part, we evaluate and analyse error-tolerance mechanisms (error protection and error resilience), measuring their performance over a wide range of packet loss rates, from lightly lossy to error-prone environments. The following contributions focus on an on-the-fly systematic erasure code, named Tetrys, which outperforms classical block codes (Forward Error Correction, FEC). First, we study the application of Tetrys to real-time video transmission over multiple paths and propose a decoupling between load allocation and redundancy management. Tetrys coupled with a load-splitting scheme called Encoded Multipath Streaming (EMS) outperforms the coupling of EMS with FEC under both tested loss models (Bernoulli and Gilbert-Elliott) in terms of residual loss rate and video quality. Exploiting Tetrys's full-reliability property, we then study the performance of Tetrys late decoding (LD), in which late-arriving packets are decoded anyway and used to stop error propagation. Finally, a deeper study of Tetrys proposes a redundancy adaptation algorithm, named A-Tetrys, to cope with network dynamics; it combines reactive and proactive behaviours to adapt its redundancy ratio. The performance evaluation shows that A-Tetrys copes well with simultaneous variations of the loss rate, the loss pattern, and the delay. We then address another challenge by proposing a new fairness criterion: video quality. We propose a Q-AIMD algorithm that enables a fair share of the bandwidth, in terms of video quality, among competing flows; we present a system for deploying Q-AIMD and study its convergence. Evaluation with different video quality metrics (PSNR, QP, and VQM) shows an important reduction of the video quality discrepancies among flows compared with the traditional throughput-based AIMD approach. Finally, another contribution proposes an approach called the virtual curve. Unlike Q-AIMD, which is decentralized, the virtual curve is a centralized approach that enables both intra-fairness among video flows in terms of video quality and inter-fairness between video and non-video flows in terms of throughput.
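A rough sketch of the Q-AIMD idea follows; the rate-quality model and the constants are invented for illustration, not taken from the thesis. Instead of applying additive increase and multiplicative decrease to the sending rate, each flow applies them to its target video quality and maps the quality back to a rate through its own rate-quality curve, so flows with different content complexity converge toward similar qualities rather than similar rates.

```python
import math

class QAimdFlow:
    """AIMD in the quality domain; quality -> rate via a per-flow model."""

    def __init__(self, complexity: float, quality0: float,
                 alpha: float = 0.5, beta: float = 0.7):
        self.complexity = complexity  # harder content needs more bits per quality point
        self.quality = quality0       # e.g. a PSNR-like score in dB
        self.alpha, self.beta = alpha, beta

    def rate_for(self, quality: float) -> float:
        # Hypothetical convex rate-quality model: rate grows exponentially with quality.
        return self.complexity * math.exp(0.1 * quality)

    def on_ack(self):         # no congestion: additive increase in quality
        self.quality += self.alpha

    def on_congestion(self):  # congestion: multiplicative decrease in quality
        self.quality *= self.beta

# Two flows with very different content complexity converge in quality, not in rate.
flows = [QAimdFlow(complexity=1.0, quality0=40.0),
         QAimdFlow(complexity=3.0, quality0=20.0)]
capacity = 150.0
for _ in range(200):
    total = sum(f.rate_for(f.quality) for f in flows)
    for f in flows:
        if total > capacity:
            f.on_congestion()
        else:
            f.on_ack()
print([round(f.quality, 1) for f in flows])
```

The classic AIMD fairness argument applies unchanged, only in the controlled variable: equal additive steps and proportional decreases shrink the quality gap between the two flows on every congestion event.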
7
Utilization of forward error correction (FEC) techniques with extensible markup language (XML) schema-based binary compression (XSBC) technology. Norbraten, Terry D., 12 1900
Approved for public release, distribution is unlimited / In order to plug current open-source, open-standard Java programming technology into the building blocks of the US Navy's ForceNet, stove-piped systems first need to be made extensible to other pertinent applications; adopting extensible, cross-platform open technologies will then begin to bridge gaps between old and new weapons systems. A real-time battle-space picture, with as much or as little detail as needed, is now a vital requirement, and access to this information via wireless laptop technology is here now. Transmission of the data that raises the resolution of that battle-space snapshot will invariably pass through noisy links, such as those found in the shallow-water littoral regions where Autonomous Underwater and Unmanned Underwater Vehicles (AUVs/UUVs) gather intelligence for the sea warrior who needs it. A battle-space picture built from data transmitted within these noisy, unpredictable acoustic regions demands efficiency and reliability features that are abstracted away from the user. To realize this efficiency, Extensible Markup Language (XML) Schema-based Binary Compression (XSBC), in combination with Vandermonde-based Forward Error Correction (FEC) erasure codes, offers efficient streaming of plain-text XML documents in a highly compressed form, together with a data self-healing capability should data be lost during transmission over unpredictable media. Both the XSBC and FEC libraries detailed in this thesis are open-source Java Application Program Interfaces (APIs) that can be readily adapted for extensible, cross-platform applications, adding functional capability to ForceNet that the sea warrior can access on demand, at sea, and in real time. These features are presented in the Autonomous Underwater Vehicle (AUV) Workbench (AUVW), a Java-based application that will become a valuable tool for warriors involved in Undersea Warfare (UW). / Lieutenant, United States Navy
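A sketch of the Vandermonde construction behind such FEC erasure codes may be useful; it is a generic illustration, not the code from the thesis's libraries. Encoding evaluates the data polynomial over GF(256) at n distinct points; since a Vandermonde system on any k distinct points is invertible, any k of the n output symbols recover the data by interpolation.

```python
# GF(256) arithmetic with the common primitive polynomial x^8+x^4+x^3+x^2+1 (0x11d).
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]  # wrap so products never index out of range

def gf_mul(a: int, b: int) -> int:
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def encode(data: list[int], n: int) -> list[int]:
    """Evaluate the degree-(k-1) data polynomial at n distinct points alpha^j.
    Each output row is a Vandermonde row [1, xj, xj^2, ...]; any k outputs
    determine the polynomial, making this an (n, k) erasure code."""
    assert n <= 255
    out = []
    for j in range(n):
        xj = EXP[j]          # distinct nonzero points alpha^0 .. alpha^{n-1}
        s, xp = 0, 1
        for d in data:
            s ^= gf_mul(d, xp)   # accumulate d_i * xj^i (xor = GF(256) addition)
            xp = gf_mul(xp, xj)
        out.append(s)
    return out

print(encode([0x12, 0x34, 0x56], n=6))  # k=3 data symbols -> 6 coded symbols
```

Decoding is the inverse problem: collect any k coded symbols, solve the corresponding k x k Vandermonde system (Lagrange interpolation), and read off the data, which is the "self-healing" behaviour the abstract describes.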
8
Adaptive Resource Allocation for Statistical QoS Provisioning in Mobile Wireless Communications and Networks. Du, Qinghe, 2010 December 1900
Because wireless channels vary strongly across the time, frequency, and space domains, statistical QoS provisioning, instead of deterministic QoS guarantees, has become a recognized feature of next-generation wireless networks. In this dissertation,
we study the adaptive wireless resource allocation problems for statistical QoS
provisioning, such as guaranteeing the specified delay-bound violation probability,
upper-bounding the average loss-rate, optimizing the average goodput/throughput,
etc., in several typical types of mobile wireless networks.
In the first part of this dissertation, we study the statistical QoS provisioning for
mobile multicast through the adaptive resource allocations, where different multicast
receivers attempt to receive the common messages from a single base-station sender
over broadcast fading channels. Because of the heterogeneous fading across different
multicast receivers, both instantaneously and statistically, the design of efficient adaptive rate control and resource allocation for wireless multicast is a widely cited open problem. We first study the time-sharing-based goodput optimization problem
for non-realtime multicast services. Then, to more comprehensively characterize the
QoS provisioning problems for mobile multicast with diverse QoS requirements, we
further integrate statistical delay-QoS control techniques (effective capacity theory and statistical loss-rate control) with information theory to propose a QoS-driven optimization framework. Applying this framework and solving the corresponding optimization problem, we identify the optimal tradeoff among statistical delay-QoS
requirements, sustainable traffic load, and the average loss rate through the adaptive
resource allocations and queue management. Furthermore, we study the adaptive
resource allocation problems for multi-layer video multicast to satisfy diverse statistical
delay and loss QoS requirements over different video layers. In addition, we derive an efficient adaptive erasure-correction coding scheme for packet-level multicast, where the erasure-correction code is dynamically constructed based on the multicast receivers' packet-loss statuses, to achieve high error-control efficiency in mobile multicast networks.
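For reference, the effective capacity on which the framework builds is commonly defined as follows; this is the standard definition from the statistical-QoS literature, stated here for convenience.

```latex
% Effective capacity of the service process S(t) (cumulative service up to
% time t) under QoS exponent \theta > 0: the maximum constant arrival rate
% the channel can sustain while the delay-bound violation probability
% decays exponentially at rate \theta.
C(\theta) = -\lim_{t \to \infty} \frac{1}{\theta t}\,
            \ln \mathbb{E}\!\left[ e^{-\theta S(t)} \right], \qquad \theta > 0
```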
In the second part of this dissertation, we design the adaptive resource allocation
schemes for QoS provisioning in unicast-based wireless networks, with emphasis
on statistical delay-QoS guarantees. First, we develop the QoS-driven time-slot and
power allocation schemes for multi-user downlink transmissions (with independent
messages) in cellular networks to maximize the delay-QoS-constrained sum system
throughput. Second, we propose the delay-QoS-aware base-station selection schemes
in distributed multiple-input-multiple-output systems. Third, we study queue-aware spectrum sensing in cognitive radio networks for statistical delay-QoS provisioning.
Analyses and simulations are presented to show the advantages of our proposed
schemes and the impact of delay-QoS requirements on adaptive resource allocations
in various environments.
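The packet-level adaptive erasure-correction idea from the first part can be sketched as follows; the feedback protocol below is a simplified invention, not the dissertation's construction. After each batch of k data packets, every receiver reports which packets it lost, and the sender sizes a coded repair burst to the worst receiver's loss count, relying on the MDS (or rateless) property that any L repair packets complete a receiver missing L packets.

```python
def repair_budget(loss_reports: dict[str, set[int]]) -> int:
    """Repair packets needed for this batch: with an MDS/rateless code, a
    receiver missing L of the k data packets is completed by any L repair
    packets, so the worst-case receiver dictates the budget."""
    return max((len(lost) for lost in loss_reports.values()), default=0)

# Receivers report which of the k=8 data packets of the batch they lost.
reports = {
    "rx1": {2, 5},          # needs any 2 repair packets
    "rx2": set(),           # needs none
    "rx3": {0, 3, 6, 7},    # worst case: needs any 4
}
print(repair_budget(reports))  # -> 4 coded repairs, versus 6 distinct
# retransmissions (the union {0,2,3,5,6,7}) that plain ARQ would need.
```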
9
Distributed P2P Data Backup System. Mészáros, István, January 2013
This master's thesis presents a model and prototype of a cooperative distributed data backup system based on a P2P communication network. The design allows users to contribute their free local disk space to the system in exchange for reliable storage of their own data at other users. The proposed solution aims to satisfy users' storage demands while also coping with the unpredictability of users regarding the free space they provide. It does so in two ways: by using Reed-Solomon codes, and by offering tunable availability parameters. One of these parameters is a time schedule indicating when a user can offer a predictable contribution to the system; the other reflects the reliability of the particular user within the promised time slots. The system is able to schedule the placement of stored data based on these parameters. The thesis also focuses on securing the system against a wide spectrum of possible attacks. The main goal is to publish the concept and the prototype. Since this is a relatively novel solution, feedback from the broader public that may use the product is also important; their comments and suggestions will drive further development of the system.
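A sketch of how the availability parameters might interact with the Reed-Solomon placement follows; modelling each peer as independently online with a schedule-weighted probability is an assumption made for illustration. Monte Carlo estimation gives the probability that at least k of the n stored fragments are reachable at a random moment.

```python
import random

def availability(k: int, peer_uptimes: list[float], trials: int = 100_000,
                 rng: random.Random = random.Random(42)) -> float:
    """P(at least k of the n = len(peer_uptimes) fragments are reachable),
    assuming each peer is independently online with its own probability
    (its schedule-weighted reliability)."""
    hits = 0
    for _ in range(trials):
        online = sum(rng.random() < p for p in peer_uptimes)
        hits += online >= k
    return hits / trials

# A (k=4, n=8) Reed-Solomon placement across peers of mixed reliability.
peers = [0.95, 0.9, 0.9, 0.8, 0.7, 0.7, 0.6, 0.5]
print(round(availability(4, peers), 4))  # close to 1 despite unreliable peers
```

A scheduler built on such an estimate could keep adding fragments (raising n) or choosing steadier peers until the estimated availability clears a user-chosen target.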