321 |
Space-efficient data sketching algorithms for network applications. Hua, Nan, 06 July 2012 (has links)
Sketching techniques are widely adopted in network applications. Sketching algorithms "encode" data into succinct data structures that can later be accessed and "decoded" for various purposes, such as network measurement, accounting, and anomaly detection. Bloom filters and counter braids are two well-known representatives of this category. Such sketching algorithms usually need to strike a tradeoff between performance (how much information can be revealed, and how fast) and cost (storage, transmission, and computation). This dissertation is dedicated to the research and development of several sketching techniques, including improved forms of stateful Bloom filters, statistical counter arrays, and error estimating codes.
The Bloom filter is a space-efficient randomized data structure for approximately representing a set in order to support membership queries. The Bloom filter and its variants have found widespread use in many networking applications where it is important to minimize the cost of storing and communicating network data. In this thesis, we propose a family of Bloom filter variants augmented by a rank-indexing method. We show that such augmentation significantly reduces both space and the number of memory accesses, especially when deletions of set elements from the Bloom filter need to be supported.
An exact active counter array is another important building block in many sketching algorithms, where the storage cost of the array is of paramount concern. Previous approaches reduce storage costs but either lose accuracy or support only passive measurements. In this thesis, we propose an exact statistics counter array architecture that supports active measurements (real-time reads and writes). It also leverages the aforementioned rank-indexing method and exploits statistical multiplexing to minimize the storage cost of the counter array.
Error estimating coding (EEC) has recently been established as an important tool for estimating bit error rates in the transmission of packets over wireless links. In essence, the EEC problem is also a sketching problem, since an EEC code can be viewed as a sketch of the packet sent, which is decoded by the receiver to estimate the bit error rate. In this thesis, we first investigate the asymptotic bounds of error estimating coding by viewing the problem from a two-party computation perspective, and then investigate its coding/decoding efficiency using Fisher information analysis. Further, we develop several sketching techniques, including the enhanced tug-of-war (EToW) sketch and the generalized EEC (gEEC) sketch family, which achieve around a 70% reduction in sketch size with similar estimation accuracy.
For all the solutions proposed above, we use theoretical tools such as information theory and communication complexity to investigate how far our proposed solutions are from the theoretical optimum. We show that the proposed techniques are asymptotically or empirically very close to the theoretical bounds.
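To make the membership-query idea concrete, here is a minimal classic Bloom filter sketch in Python. It is a plain bit-array formulation, not the rank-indexed variants proposed in the thesis; the salted-hash scheme and the parameter choices are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: approximate set membership, no false negatives."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)  # one byte per bit, for clarity

    def _positions(self, item):
        # Derive k bit positions from k salted SHA-256 digests (illustrative).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # May return True for items never added (false positive),
        # but never False for items that were added.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("10.0.0.1")
print("10.0.0.1" in bf)   # True
print("10.0.0.2" in bf)   # False with high probability
```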
|
322 |
Designing Secure and Robust Distributed and Pervasive Systems with Error Correcting Codes. Paul, Arnab, 11 February 2005 (has links)
This thesis investigates the role of error-correcting codes in distributed and pervasive computing. The main results lie at the intersection of security and fault tolerance for these environments. Two primary areas are explored in this thesis.
1. We have investigated protocols for large-scale fault-tolerant secure distributed storage. The two main concerns here are security and redundancy. In one arm of this research we developed SAFE, a distributed storage system based on a new protocol that offers a two-in-one solution to fault tolerance and confidentiality. This protocol is based on cryptographic properties of error-correcting codes. In another arm, we developed esf, another prototype of distributed persistent storage; esf facilitates seamless hardware extension of storage units, offers high resilience to loads, and provides high availability. The main ingredient in its design is a modern class of erasure codes known as Fountain codes. One problem in such large storage systems is the heavy overhead of the fingerprints needed for checking data integrity. esf addresses this issue with an integrity-check mechanism built on a data structure known as the Merkle tree.
2. We also investigated the design of a new remote authentication protocol, from which applications over long-range wireless links would benefit considerably. We designed and implemented LAWN, a lightweight remote authentication protocol for wireless networks that deploys a randomized approximation scheme based on error-correcting codes. We have evaluated the performance of LAWN in detail; while it adds very low computational overhead, the savings in bandwidth and power are quite dramatic.
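To illustrate why a Merkle tree cuts fingerprint overhead, here is a minimal sketch; SHA-256 and the block layout are illustrative assumptions, not esf's actual design. A single root hash commits to all blocks, and verifying one block needs only a logarithmic number of sibling hashes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Fold a list of data blocks into a single root hash."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
root = merkle_root(blocks)
assert merkle_root(blocks) == root         # untouched data matches the root
blocks[2] = b"tampered"
assert merkle_root(blocks) != root         # any corruption changes the root
print("integrity check ok")
```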
|
323 |
Applications of Random Graphs to Design and Analysis of LDPC Codes and Sensor Networks. Pishro-Nik, Hossein, 19 August 2005 (has links)
This thesis investigates a graph- and information-theoretic approach to the design and analysis of low-density parity-check (LDPC) codes and wireless networks. In this work, both LDPC codes and wireless networks are modeled as random graphs. This work proposes solutions to important theoretical and practical open problems in LDPC coding and, for the first time, introduces a framework for the analysis of finite wireless networks.
LDPC codes are considered to be among the best classes of error-correcting codes. In this thesis, several problems in this area are studied. First, an improved decoding algorithm for LDPC codes is introduced. Compared to standard iterative decoding, the proposed decoding algorithm can achieve bit error rates several orders of magnitude lower, at almost the same complexity. Second, this work presents a variety of bounds on the achievable performance of different LDPC coding scenarios. Third, it studies rate-compatible LDPC codes and establishes fundamental properties of these codes; it also gives guidelines for the optimal design of rate-compatible codes. Finally, it studies non-uniform and unequal error protection using LDPC codes and explores their applications to data storage systems and communication networks. It presents a new error-control scheme for volume holographic memory (VHM) systems and shows that the new method can increase the storage capacity by more than fifty percent compared to previous schemes.
This work also investigates the application of random graphs to the design and analysis of wireless ad hoc and sensor networks. It introduces a framework for the analysis of finite wireless networks; such a framework was previously lacking in the literature. Using the framework, different network properties such as capacity, connectivity, coverage, and routing and security algorithms are studied. Finally, connectivity properties of large-scale sensor networks are investigated. It is shown how the unreliability of sensors, link failures, and non-uniform distribution of nodes affect the connectivity of sensor networks.
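For a concrete taste of iterative LDPC-style decoding, the sketch below implements a simple Gallager-type bit-flipping decoder. It uses the (7,4) Hamming parity-check matrix as a toy stand-in for a sparse LDPC matrix and is not the improved algorithm proposed in the thesis.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Bit-flipping decoding over a binary parity-check matrix H."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2             # which parity checks are violated
        if not syndrome.any():
            return x                     # valid codeword found
        # Count, for each bit, how many failed checks it participates in,
        # and flip the bit(s) involved in the most failures.
        fails = H.T @ syndrome
        x[fails == fails.max()] ^= 1
    return x                             # decoding failure: return best effort

# (7,4) Hamming code as a toy stand-in for a sparse LDPC matrix.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.zeros(7, dtype=int)        # the all-zero word is a codeword
received = codeword.copy()
received[6] ^= 1                         # inject a single bit error
print(bit_flip_decode(H, received))      # recovers the all-zero codeword
```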
|
325 |
Applications of graph-based codes in networks: analysis of capacity and design of improved algorithms. Vellambi, Badri Narayanan, 25 August 2008 (has links)
The conception of turbo codes by Berrou et al. created renewed interest in modern graph-based codes. The many encouraging results that have come to light since then have fortified the role these codes will play as potential solutions for present and future communication problems.
This work focuses on both practical and theoretical aspects of graph-based codes, and the thesis can be broadly categorized into three parts. The first part focuses on the design of practical graph-based codes of short lengths. While both low-density parity-check codes and rateless codes have been shown to be asymptotically optimal under the message-passing (MP) decoder, the performance of short-length codes from these families under MP decoding is starkly sub-optimal. This work first addresses the structural characterization of stopping sets to understand this sub-optimality. Using this characterization, a novel improved decoder that offers several orders of magnitude improvement in bit-error rate is introduced. Next, a novel scheme for the design of a good rate-compatible family of punctured codes is proposed.
The second part of the thesis aims at establishing these codes as a good tool for developing reliable, energy-efficient, and low-latency data dissemination schemes in networks. The problems of broadcasting in wireless multihop networks and of unicast in delay-tolerant networks are investigated. In both cases, rateless coding offers an elegant means of achieving the goals of the chosen communication protocols; the ratelessness and the randomness of the encoding process make this scheme specifically suited to such network applications.
The final part of the thesis investigates the application of a specific class of codes, called network codes, to finite-buffer wired networks. This part of the work aims at establishing a framework for the theoretical study and understanding of finite-buffer networks. The proposed method extends existing results into an iterative Markov chain-based technique for general acyclic wired networks. The framework not only estimates the capacity of such networks, but also provides a means to monitor network traffic and packet-drop rates on the various links of the network.
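A minimal sketch of the kind of Markov chain analysis described here, reduced to a single finite-buffer link; the slotted arrival/service model and the parameters are illustrative assumptions, not the thesis's iterative multi-link technique.

```python
import numpy as np

def buffer_occupancy_chain(K, p_arrival, p_service):
    """Steady-state occupancy of a single finite-buffer link, modeled as a
    birth-death Markov chain with K+1 states (0..K packets queued)."""
    P = np.zeros((K + 1, K + 1))
    up = p_arrival * (1 - p_service)     # a packet arrives, none departs
    down = p_service * (1 - p_arrival)   # a packet departs, none arrives
    for s in range(K + 1):
        if s < K:
            P[s, s + 1] = up
        if s > 0:
            P[s, s - 1] = down
        P[s, s] = 1 - P[s].sum()         # remaining probability stays put
    # Solve pi = pi P by power iteration.
    pi = np.full(K + 1, 1.0 / (K + 1))
    for _ in range(10_000):
        pi = pi @ P
    drop_rate = p_arrival * pi[K] * (1 - p_service)  # arrival to a full buffer
    return pi, drop_rate

pi, drop = buffer_occupancy_chain(K=8, p_arrival=0.6, p_service=0.5)
print("P(full buffer) =", pi[-1], " drop rate =", drop)
```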
|
326 |
Clustering for Designing Error Correcting Codes. Joseph, Binoy (has links)
In this thesis we address the problem of designing codes for specific applications. To do so we make use of the relationship between clusters and codes: designing a block code over any finite-dimensional space may be thought of as forming the corresponding number of clusters over that space. The literature offers a number of clustering algorithms, and we have examined the performance of several of them, such as Linde-Buzo-Gray, simulated annealing, simulated annealing with Linde-Buzo-Gray, and deterministic annealing, for the design of codes. All of these algorithms, however, use the squared Euclidean distance as the clustering metric. This does not match the distance measure of interest in the error-correcting scenario, namely the Hamming distance. Consequently, we have developed an algorithm that can be used for clustering with the Hamming distance as the distance measure. We also observe that stochastic algorithms such as simulated annealing fail to produce optimum codes because of very slow convergence near the end. As a remedy, we propose a modification, based on the code structure, that makes it possible for such algorithms to converge to optimum codes.
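A minimal sketch of clustering-style code design under the Hamming distance, using a simple accept-if-not-worse local search rather than the annealing-based algorithms examined in the thesis; the code length and size are illustrative.

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cluster_codewords(n_bits, n_codewords, iters=2000, seed=1):
    """Local search over the n-bit hypercube under Hamming distance:
    perturb codewords one bit at a time and keep changes that do not
    shrink the minimum pairwise distance (the quantity that sets
    error-correction capability)."""
    rng = random.Random(seed)
    code = [[rng.randint(0, 1) for _ in range(n_bits)]
            for _ in range(n_codewords)]

    def min_dist(c):
        return min(hamming(c[i], c[j])
                   for i in range(len(c)) for j in range(i + 1, len(c)))

    best = min_dist(code)
    for _ in range(iters):
        i, b = rng.randrange(n_codewords), rng.randrange(n_bits)
        code[i][b] ^= 1                  # flip one bit of one codeword
        d = min_dist(code)
        if d >= best:
            best = d                     # accept non-worsening moves
        else:
            code[i][b] ^= 1              # revert worsening moves
    return code, best

code, d_min = cluster_codewords(n_bits=7, n_codewords=8)
print("minimum distance found:", d_min)
```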
|
327 |
Turbo Decoding With Early State Decisions. Lindblom, Johannes, January 2008 (has links)
Turbo codes were first presented in 1993 by C. Berrou, A. Glavieux, and P. Thitimajshima. Since then this class of error-correcting codes has become one of the most popular because of its good properties: turbo codes are able to come very close to the theoretical limit, the Shannon limit. Turbo codes are used, for example, in the third generation of mobile telephony (3G) and in the IEEE 802.16 (WiMAX) standard.
There are some drawbacks to the algorithm for decoding turbo codes. The decoder uses a Maximum A Posteriori (MAP) algorithm, which is complex. Because the decoder uses many variables, the decoding circuit consumes a lot of power on memory accesses and internal communication. One way to reduce this is to make early decisions.
In this work I have focused on making early decisions about the encoder states. A major part of the work was also to ensure that the expressions were written so that as few variables as possible are needed. A termination condition is also introduced. Simulations based on estimates of the number of memory accesses show that this number decreases significantly.
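A toy illustration of the early-decision idea: freeze a trellis state once its posterior probability dominates, so later iterations can skip the associated metric updates and memory accesses. The threshold test and the data below are illustrative assumptions, not the thesis's exact decision criterion or termination condition.

```python
import numpy as np

def early_state_decisions(state_post, threshold=0.95):
    """Scan per-step state posteriors (T x S) and record the trellis steps
    where one state already dominates; such steps can be 'frozen' to save
    memory traffic in subsequent decoder iterations."""
    decisions = []
    for t, probs in enumerate(state_post):
        s = int(np.argmax(probs))
        if probs[s] >= threshold:
            decisions.append((t, s))     # state decided early at step t
    return decisions

# Toy posteriors for a 4-state trellis over 5 steps (rows sum to 1).
post = np.array([[0.97, 0.01, 0.01, 0.01],
                 [0.40, 0.30, 0.20, 0.10],
                 [0.02, 0.96, 0.01, 0.01],
                 [0.25, 0.25, 0.25, 0.25],
                 [0.01, 0.01, 0.01, 0.97]])
print(early_state_decisions(post))      # [(0, 0), (2, 1), (4, 3)]
```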
|
328 |
Iterative decoding algorithms for LDPC codes and a study of the effect of quantization error on the performance of the Log Sum-Product algorithm. Κάνιστρας, Νικόλαος, 25 May 2009 (has links)
Low-Density Parity-Check (LDPC) codes belong to the category of linear block codes. They are error-detection and error-correction codes. Although LDPC codes were proposed by R. Gallager back in 1962, they were scarcely considered in the 35 years that followed; only in the late 1990s were they rediscovered, owing to decoding performance that approaches the Shannon limit. As their name indicates, they are parity-check codes whose parity-check matrix is sparse. Since the encoding process is simple, it is the decoding procedure that largely determines the performance and power consumption of the decoder. For this reason several iterative decoding algorithms have been developed; however, a theoretical determination of their performance has not yet been feasible.
This work presents the most important iterative decoding algorithms for LDPC codes developed to date. These algorithms are implemented in MATLAB and their performance is studied through simulation. The most powerful among them, namely log Sum-Product, relies on a highly nonlinear function called Φ(x). Hardware implementation of this function enforces finite accuracy, due to finite word-length representation. The roundoff error that this imposes degrades the decoding performance by means of two mechanisms. Both mechanisms are analyzed, and a theoretical expression for each mechanism's activation probability at the end of the first iteration of the algorithm is derived.
The impact of the roundoff error on the decisions taken by the log Sum-Product decoding algorithm at the end of each iteration is also studied. The mechanism by which roundoff alters the decisions of a finite word-length implementation of the algorithm, compared to the infinite-precision case, is analyzed, and a corresponding theoretical model is developed. The proposed model computes the probability of changed decisions due to the finite word-length representation of Φ(x), but it is not yet complete, since some of its parameters are determined from experimental results. Further research will focus on completing the theoretical model, since it could lead to a tool that computes the expected degradation of decoding performance for a particular decoder implementation without time-consuming simulations.
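A quick way to see the finite-precision issue is to quantize Φ(x) = -log(tanh(x/2)), the standard nonlinearity of log Sum-Product decoding, at a few word lengths. The uniform quantizer and clipping range below are illustrative assumptions, not the specific quantization scheme studied in the thesis.

```python
import numpy as np

def phi(x):
    # Phi(x) = -log(tanh(x/2)), the nonlinearity used by log Sum-Product.
    return -np.log(np.tanh(x / 2.0))

def quantize(values, n_bits, max_val):
    """Uniform quantizer: clip to [0, max_val], round to 2**n_bits levels."""
    step = max_val / (2 ** n_bits - 1)
    return np.round(np.clip(values, 0, max_val) / step) * step

x = np.linspace(0.05, 6.0, 200)
exact = phi(x)
for n_bits in (4, 6, 8):
    err = np.max(np.abs(quantize(exact, n_bits, max_val=4.0) - exact))
    print(f"{n_bits}-bit Phi(x): max quantization error = {err:.4f}")
```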
|
330 |
VLSI Implementation of Digital Signal Processing Algorithms for MIMO Detection and Channel Pre-processing. Patel, Dimpesh, 16 September 2011 (has links)
The efficient high-throughput VLSI implementation of soft-output MIMO detectors for high-order constellations and large antenna configurations has been a major challenge in the literature. This thesis introduces a novel soft-output K-Best scheme that improves BER performance and significantly reduces computational complexity through three major improvements. It also presents an area- and power-efficient VLSI implementation of a 4x4 64-QAM soft K-Best MIMO detector that attains the highest detection throughput reported in the literature, 2 Gbps, and the second-lowest energy per bit, fulfilling the aggressive requirements of emerging 4G standards such as IEEE 802.16m and LTE-Advanced. A low-complexity and highly parallel algorithm for QR decomposition, an essential channel pre-processing task, is also developed using 2D, Householder 3D, and 4D Givens rotations. Test results for the QRD chip, fabricated in 0.13 um CMOS, show that it attains the lowest reported latency of 144 ns and the highest QR processing efficiency.
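A minimal sketch of QR decomposition by 2D Givens rotations, the basic building block named above; this is a textbook formulation, not the chip's Householder 3D/4D extension, and the 4x4 Rayleigh-like channel matrix is an illustrative assumption.

```python
import numpy as np

def givens_qr(A):
    """QR decomposition via 2D Givens rotations: zero out one subdiagonal
    entry at a time with a complex plane rotation."""
    R = A.astype(complex).copy()
    m, n = R.shape
    Q = np.eye(m, dtype=complex)
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(abs(a), abs(b))
            if r == 0:
                continue
            c, s = a / r, b / r          # rotation annihilating R[i, j]
            G = np.array([[np.conj(c), np.conj(s)], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.conj().T
    return Q, R

# Toy 4x4 complex channel matrix.
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
Q, R = givens_qr(H)
print(np.allclose(Q @ R, H), np.allclose(np.tril(R, -1), 0))   # True True
```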
|