211

Advanced Coding Techniques with Applications to Storage Systems

Nguyen, Phong Sy, May 2012
This dissertation considers several coding techniques based on Reed-Solomon (RS) and low-density parity-check (LDPC) codes. These two prominent families of error-correcting codes have attracted a great amount of interest from both theorists and practitioners and have been applied in many communication scenarios. In particular, data storage systems have greatly benefited from these codes in improving the reliability of the storage media. The first part of this dissertation presents a unified framework based on rate-distortion (RD) theory to analyze and optimize multiple decoding trials of RS codes. Finding the best set of candidate decoding patterns is shown to be equivalent to a covering problem which can be solved asymptotically by RD theory. The proposed approach helps understand the asymptotic performance-versus-complexity trade-off of these multiple-attempt decoding algorithms and can be applied to a wide range of decoders and error models. In the second part, we consider spatially-coupled (SC) codes, or terminated LDPC convolutional codes, over intersymbol-interference (ISI) channels under joint iterative decoding. We empirically observe the phenomenon of threshold saturation whereby the belief-propagation (BP) threshold of the SC ensemble is improved to the maximum a posteriori (MAP) threshold of the underlying ensemble. More specifically, we derive a generalized extrinsic information transfer (GEXIT) curve for the joint decoder that naturally obeys the area theorem and estimate the MAP and BP thresholds. We also conjecture that, due to threshold saturation, SC codes can universally approach the symmetric information rate of ISI channels. In the third part, a similar analysis is used to analyze the MAP thresholds of LDPC codes for several multiuser systems, namely a noisy Slepian-Wolf problem and a multiple access channel with erasures. We provide rigorous analysis and derive upper bounds on the MAP thresholds which are shown to be tight in some cases. This analysis is a first step towards proving threshold saturation for these systems, which would imply that SC codes with joint BP decoding can universally approach the entire capacity region of the corresponding systems.
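As a toy illustration of the covering viewpoint in the first part, the sketch below (a Monte-Carlo toy, not the dissertation's rate-distortion construction; the RS(15, 9) code, the error model, and the attempt budget are all assumed for illustration) greedily picks erasure patterns so that errors-and-erasures decoding, which succeeds when 2e + s < d_min, covers as many sampled error events as possible.

```python
import random
from itertools import combinations

N, K = 15, 9                 # hypothetical RS(15, 9) code
D_MIN = N - K + 1            # minimum distance of an MDS code
NUM_EVENTS = 2000            # sampled error events (toy error model)
random.seed(1)

def decodable(error_positions, erased_positions):
    """Errors-and-erasures decoding succeeds iff 2e + s < d_min,
    where e counts errors outside the erased set and s = |erased set|."""
    e = len(error_positions - erased_positions)
    s = len(erased_positions)
    return 2 * e + s < D_MIN

# Toy error model: 3-5 symbol errors, biased towards "unreliable" low positions.
events = []
for _ in range(NUM_EVENTS):
    w = random.randint(3, 5)
    events.append(frozenset(random.sample(range(6), 3)) |
                  frozenset(random.sample(range(N), w - 3)))

# Candidate erasure patterns: erase subsets of the 6 least-reliable positions.
candidates = [frozenset(c) for r in (0, 2, 4)
              for c in combinations(range(6), r)]

# Greedy cover: pick patterns until a budget of decoding attempts is spent.
budget, chosen, uncovered = 3, [], set(range(NUM_EVENTS))
while budget and uncovered:
    best = max(candidates,
               key=lambda p: sum(decodable(events[i], p) for i in uncovered))
    chosen.append(best)
    uncovered -= {i for i in uncovered if decodable(events[i], best)}
    budget -= 1

print("chosen erasure patterns:", [sorted(p) for p in chosen])
print("fraction of events covered:", 1 - len(uncovered) / NUM_EVENTS)
```

The greedy cover here merely stands in for the asymptotically optimal pattern sets that the rate-distortion analysis characterizes.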
212

Comparison Of Decoding Algorithms For Low-density Parity-check Codes

Kolayli, Mert, 01 September 2006
Low-density parity-check (LDPC) codes are a subclass of linear block codes. These codes have parity-check matrices in which the ratio of non-zero elements to all elements is low. This property is exploited in defining low-complexity decoding algorithms. Low-density parity-check codes have good distance properties and error-correction capability near the Shannon limit. In this thesis, the sum-product and the bit-flip decoding algorithms for low-density parity-check codes are implemented in MATLAB on an Intel Pentium M 1.86 GHz processor. Simulations for the two decoding algorithms are carried out over the additive white Gaussian noise (AWGN) channel while changing code parameters such as the information rate, the blocklength of the code, and the column weight of the parity-check matrix. Performance comparison of the two decoding algorithms is made according to these simulation results. As expected, the sum-product algorithm, which is based on soft-decision decoding, outperforms the bit-flip algorithm, which depends on hard-decision decoding. Our simulations show that the performance of LDPC codes improves with increasing blocklength and number of iterations for both decoding algorithms. Since the sum-product algorithm has lower error-floor characteristics, increasing the number of iterations is more effective for the sum-product decoder than for the bit-flip decoder. With better BER performance at lower information rates, the bit-flip algorithm performs according to expectations; however, the performance of the sum-product decoder deteriorates for information rates below 0.5 instead of improving. With irregular construction of LDPC codes, a performance improvement is observed, especially at low SNR values.
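As a sketch of the hard-decision side of this comparison, the following generic bit-flip decoder (not the thesis's MATLAB code; the small parity-check matrix is hypothetical) flips, at each iteration, the bits that participate in the largest number of unsatisfied checks.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit-flipping decoding of a binary LDPC code.

    H       : (m, n) parity-check matrix over GF(2) (numpy array of 0/1)
    y       : length-n hard-decision received word (0/1)
    returns : decoded codeword estimate and success flag
    """
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2                 # which checks are unsatisfied
        if not syndrome.any():
            return x, True                   # all parity checks satisfied
        # Count, for every bit, how many unsatisfied checks it participates in.
        unsat_count = syndrome @ H
        # Flip all bits attached to the maximum number of unsatisfied checks.
        x[unsat_count == unsat_count.max()] ^= 1
    return x, False

# Tiny example with a hypothetical 3x6 parity-check matrix and one bit error.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
codeword = np.zeros(6, dtype=int)            # all-zero word is always a codeword
received = codeword.copy(); received[1] ^= 1 # flip one bit
print(bit_flip_decode(H, received))
```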
213

LDPC Coded OFDM-IDMA Systems

Lu, Kuo-sheng, 05 August 2009
Recently, a novel technique for multi-user spread-spectrum mobile systems, called the interleave-division multiple-access (IDMA) scheme, was proposed by L. Ping et al. The advantage of IDMA is that it inherits many special features from code-division multiple-access (CDMA), such as diversity against fading and mitigation of other-cell user interference. Moreover, it is capable of employing a very simple chip-by-chip iterative multi-user detection strategy. In this thesis, we investigate the performance of combining IDMA with the orthogonal frequency-division multiplexing (OFDM) scheme. In order to improve the bit error rate performance, we apply low-density parity-check (LDPC) coding to the proposed scheme, referred to as the LDPC-coded OFDM-IDMA system. With the aid of the iterative multi-user detection algorithm, multiple-access interference (MAI) and inter-symbol interference (ISI) can be cancelled efficiently. In short, the proposed scheme provides an efficient solution to high-rate multiuser communications over multipath fading channels.
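The chip-by-chip detection mentioned above can be sketched as follows for a simplified real-valued, single-path channel with BPSK chips (an assumption for illustration; the thesis combines this estimator with OFDM and LDPC decoding, and the channel gains and noise variance are taken as known).

```python
import numpy as np

def ese_update(r, h, prior_llr, noise_var):
    """Chip-by-chip elementary signal estimator (ESE) for IDMA.

    r         : received chips, shape (J,)
    h         : real channel gains per user, shape (K,)
    prior_llr : a-priori LLRs of each user's chips from the decoders, shape (K, J)
    returns   : extrinsic LLRs for each user's chips, shape (K, J)
    """
    mean_x = np.tanh(prior_llr / 2.0)                      # E[x_k(j)] for BPSK chips
    var_x = 1.0 - mean_x**2                                # Var[x_k(j)]
    mean_r = (h[:, None] * mean_x).sum(axis=0)             # E[r(j)]
    var_r = ((h[:, None]**2) * var_x).sum(axis=0) + noise_var
    ext_llr = np.empty_like(prior_llr)
    for k in range(len(h)):
        # Treat all other users plus noise as one Gaussian interference term.
        mean_int = mean_r - h[k] * mean_x[k]
        var_int = var_r - h[k]**2 * var_x[k]
        ext_llr[k] = 2.0 * h[k] * (r - mean_int) / var_int
    return ext_llr

# Toy usage: 2 users, 8 chips, unit gains, no prior information on the first pass.
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=(2, 8))
r = x.sum(axis=0) + 0.3 * rng.standard_normal(8)
print(ese_update(r, np.ones(2), np.zeros((2, 8)), 0.09))
```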
214

Applications of graph-based codes in networks: analysis of capacity and design of improved algorithms

Vellambi, Badri Narayanan, 25 August 2008
The conception of turbo codes by Berrou et al. has created a renewed interest in modern graph-based codes. Several encouraging results that have come to light since then have fortified the role these codes will play as potential solutions for present and future communication problems. This work focuses on both practical and theoretical aspects of graph-based codes. The thesis can be broadly categorized into three parts. The first part of the thesis focuses on the design of practical graph-based codes of short lengths. While both low-density parity-check codes and rateless codes have been shown to be asymptotically optimal under the message-passing (MP) decoder, the performance of short-length codes from these families under MP decoding is starkly sub-optimal. This work first addresses the structural characterization of stopping sets to understand this sub-optimality. Using this characterization, a novel improved decoder that offers several orders of magnitude improvement in bit-error rates is introduced. Next, a novel scheme for the design of a good rate-compatible family of punctured codes is proposed. The second part of the thesis aims at establishing these codes as a good tool for developing reliable, energy-efficient and low-latency data dissemination schemes in networks. The problems of broadcasting in wireless multihop networks and of unicast in delay-tolerant networks are investigated. In both cases, rateless coding is seen to offer an elegant means of achieving the goals of the chosen communication protocols. The ratelessness and the randomness of the encoding process make this scheme particularly suited to such network applications. The final part of the thesis investigates an application of a specific class of codes called network codes to finite-buffer wired networks. This part of the work aims at establishing a framework for the theoretical study and understanding of finite-buffer networks. The proposed method extends existing results into an iterative Markov chain-based technique for general acyclic wired networks. The framework not only estimates the capacity of such networks, but also provides a means to monitor network traffic and packet drop rates on various links of the network.
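As a toy instance of the Markov-chain idea in the final part (the thesis treats general acyclic networks iteratively; the slotted packet model, erasure probabilities, and buffer sizes below are assumptions for illustration), one can track the occupancy of a single finite-buffer relay and read the delivered packet rate off its stationary distribution.

```python
import numpy as np

def relay_throughput(eps_in, eps_out, B):
    """Stationary throughput of a single finite-buffer relay (toy model).

    Each slot: the relay first tries to forward its head-of-line packet
    (success prob 1 - eps_out when the buffer is non-empty), then accepts a
    new arrival (prob 1 - eps_in) if a buffer slot is free afterwards.
    """
    P = np.zeros((B + 1, B + 1))
    for s in range(B + 1):
        departures = ((1, 1 - eps_out), (0, eps_out)) if s > 0 else ((0, 1.0),)
        for dep, p_dep in departures:
            left = s - dep
            for arr, p_arr in ((1, 1 - eps_in), (0, eps_in)):
                nxt = min(left + arr, B)        # arrival dropped if buffer is full
                P[s, nxt] += p_dep * p_arr
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    busy = 1.0 - pi[0]                          # prob the buffer is non-empty
    return busy * (1 - eps_out)                 # packets delivered per slot

for B in (1, 2, 5, 20):
    print(B, round(relay_throughput(eps_in=0.2, eps_out=0.2, B=B), 4))
```

As the buffer grows, the delivered rate approaches the bottleneck link rate, which is the kind of capacity estimate the iterative framework generalizes to whole networks.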
215

Computational Problems In Codes On Graphs

Krishnan, K Murali
Two standard graph representations for linear codes are the Tanner graph and the tail-biting trellis. Such graph representations allow the decoding problem for a code to be phrased as a computational problem on the corresponding graph and yield graph-theoretic criteria for good codes. When a Tanner graph for a code is used for communication across a binary erasure channel (BEC) and decoding is performed using the standard iterative decoding algorithm, the maximum number of correctable erasures is determined by the stopping distance of the Tanner graph. Hence the computational problem of determining the stopping distance of a Tanner graph is of interest. In this thesis it is shown that computing the stopping distance of a Tanner graph is NP-hard. It is also shown that there can be no (1 + ε)-approximation algorithm for the problem for any ε > 0 unless P = NP, and that an approximation ratio of 2^((log n)^(1−ε)) for any ε > 0 is impossible unless NP ⊆ DTIME(n^poly(log n)). One way to construct Tanner graphs of large stopping distance is to ensure that the graph has large girth. It is known that stopping distance increases exponentially with the girth of the Tanner graph. A new elementary combinatorial construction algorithm for an almost-regular LDPC code family with provable Ω(log n) girth and O(n²) construction complexity is presented. The bound on the girth is within a factor of two of the best known upper bound on girth. The problem of linear-time exact maximum-likelihood decoding of tail-biting trellises has remained open for several years. An O(n)-complexity approximate maximum-likelihood decoding algorithm for tail-biting trellises is presented and analyzed. Experiments indicate that the algorithm performs close to the ideal maximum-likelihood decoder.
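The stopping-distance notion at the centre of these hardness results can be illustrated with a brute-force search, which is consistent with the NP-hardness shown here in that it takes exponential time (the small parity-check matrix below is hypothetical).

```python
from itertools import combinations
import numpy as np

def stopping_distance(H):
    """Smallest non-empty stopping set of the Tanner graph of H (brute force).

    A set S of variable nodes is a stopping set if every check (row of H)
    with at least one neighbour in S has at least two neighbours in S.
    Exponential in the blocklength, so only usable for toy examples.
    """
    m, n = H.shape
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            row_weights = H[:, list(S)].sum(axis=1)
            if np.all((row_weights == 0) | (row_weights >= 2)):
                return size, S
    return None

# Hypothetical small parity-check matrix.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1]])
print(stopping_distance(H))  # -> (3, (0, 1, 2)): every row meets {0,1,2} 0 or 2 times
```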
216

Analysis of the Effects of Arithmetic Approximations on Iterative Decoders for Linear Error-Correcting Codes

Αστάρας, Στέφανος, 21 February 2015
In this thesis, we study LDPC decoding algorithms, with emphasis on the codes of the 802.11n standard. We tackle the difficulties of hardware implementation, especially those related to arithmetic computations, and propose practical solutions. Leveraging the results of extensive simulations, we determine the optimal parameters for the proposed implementations.
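A minimal sketch of the kind of arithmetic approximation in question (generic, not the specific scheme chosen in the thesis; the bit-width and quantization step are illustrative assumptions) compares the exact sum-product check-node combination with its quantized min-sum replacement.

```python
import numpy as np

def box_plus(a, b):
    """Exact sum-product check-node combination of two LLRs."""
    return 2 * np.arctanh(np.tanh(a / 2) * np.tanh(b / 2))

def min_sum_approx(a, b):
    """Hardware-friendly approximation: sign product and magnitude minimum."""
    return np.sign(a) * np.sign(b) * min(abs(a), abs(b))

def quantize(llr, bits=5, step=0.5):
    """Saturating fixed-point model of the decoder's internal LLRs (assumed format)."""
    max_val = (2**(bits - 1) - 1) * step
    return float(np.clip(round(llr / step) * step, -max_val, max_val))

for a, b in [(0.8, -1.3), (2.0, 2.5), (4.0, -0.2)]:
    exact = box_plus(a, b)
    approx = min_sum_approx(quantize(a), quantize(b))
    print(f"exact {exact:+.3f}   quantized min-sum {approx:+.3f}")
```

The gap between the two columns is exactly the kind of effect that such a study quantifies by simulation before fixing bit-widths in hardware.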
217

Analysis, Design and Implementation of Error-Correcting Codes for High-Speed Telecommunication Applications

Αγγελόπουλος, Γεώργιος, 20 October 2009
Almost all modern telecommunication systems designed for high data rate transmission have adopted error-correcting codes to increase reliability and reduce the required transmission power. One family of such codes, with extremely good performance, is the family of low-density parity-check (LDPC) codes. These codes are linear block codes with performance very close to the theoretical Shannon limit. Furthermore, the inherent parallelism of the decoding procedure makes them suitable for hardware implementation. In this thesis, we study the special characteristics and parameters of these codes in order to understand their remarkable error-correcting capability. We then choose a special category of LDPC codes, whose parity-check matrices are specially designed to facilitate hardware implementation, and design a high-throughput decoder. More specifically, we implement in VHDL an LDPC decoder for the rate-1/2, block-length-576-bit code of the WiMAX IEEE 802.16e standard, with the main goal of achieving very high throughput. For scheduling the message passing between the nodes of the circuit we use two-phase scheduling and propose two modifications that accelerate the decoding process, yielding 24% and 50% reductions in the time required for one iteration, with zero and relatively small increases in decoder area, respectively. The design is fully synthesizable and its correct operation has been verified through logic simulation. Xilinx and Mentor Graphics tools were used during the design process.
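For a rough sense of what a per-iteration speed-up means for throughput, the back-of-the-envelope calculation below uses the block length and the reported 24% and 50% improvements; the clock frequency, iteration count, and baseline cycles per iteration are purely assumed, since the abstract does not give them.

```python
# Hypothetical figures -- the abstract gives only the relative per-iteration
# improvements (24% and 50%), not absolute clock rates or cycle counts.
F_CLK = 100e6          # clock frequency in Hz (assumed)
BLOCK_LENGTH = 576     # bits per codeword (WiMAX 802.16e, rate 1/2)
ITERATIONS = 10        # decoding iterations (assumed)
CYCLES_PER_ITER = 60   # baseline cycles per iteration, two-phase schedule (assumed)

def throughput_mbps(cycles_per_iter):
    """Decoded throughput = bits per codeword / time to decode one codeword."""
    seconds_per_block = ITERATIONS * cycles_per_iter / F_CLK
    return BLOCK_LENGTH / seconds_per_block / 1e6

print("baseline         :", round(throughput_mbps(CYCLES_PER_ITER), 1), "Mb/s")
print("24% faster iter. :", round(throughput_mbps(CYCLES_PER_ITER * (1 - 0.24)), 1), "Mb/s")
print("50% faster iter. :", round(throughput_mbps(CYCLES_PER_ITER * (1 - 0.50)), 1), "Mb/s")
```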
218

On The Analysis of Spatially-Coupled GLDPC Codes and The Weighted Min-Sum Algorithm

Jian, Yung-Yih, 16 December 2013
This dissertation studies methods to achieve reliable communication over unreliable channels. Iterative decoding algorithms for low-density parity-check (LDPC) codes and generalized LDPC (GLDPC) codes are analyzed. A new class of error-correcting codes to enhance the reliability of communication in high-speed systems, such as optical communication systems, is proposed. The class of spatially-coupled GLDPC codes is studied, and a new iterative hard-decision decoding (HDD) algorithm for GLDPC codes is introduced. The main result is that the minimal redundancy allowed by Shannon's Channel Coding Theorem can be achieved by using the new iterative HDD algorithm with spatially-coupled GLDPC codes. A variety of LDPC ensembles have now been observed to approach capacity with iterative decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. To the best of our knowledge, this is the first system that can approach the channel capacity using iterative HDD. The optimality of a codeword returned by the weighted min-sum (WMS) algorithm, an iterative decoding algorithm widely used in practice, is studied as well. The attenuated max-product (AttMP) and weighted min-sum (WMS) decoding algorithms for LDPC codes are analyzed. Applying the max-product (and belief-propagation) algorithms to loopy graphs is now quite popular for best-assignment problems, largely due to their low computational complexity and impressive performance in practice. Still, there is no general understanding of the conditions required for convergence and/or the optimality of converged solutions. This work presents an analysis of both AttMP decoding and WMS decoding for LDPC codes which guarantees convergence to a fixed point when the weight factor β is sufficiently small. It also shows that, if the fixed point satisfies some consistency conditions, then it must be both a linear-programming (LP) and maximum-likelihood (ML) decoding solution.
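A compact sketch of the weighted min-sum decoder analyzed here (a generic flooding-schedule implementation with weight factor β; the toy parity-check matrix and channel LLRs are assumptions for illustration) is:

```python
import numpy as np

def wms_decode(H, llr_ch, beta=0.5, max_iter=20):
    """Weighted min-sum (WMS) decoding with weight factor beta.

    Check-to-variable messages use the min-sum rule scaled by beta; the
    analysis summarized above gives convergence guarantees for small beta.
    """
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    c2v = {e: 0.0 for e in edges}
    for _ in range(max_iter):
        # Variable-to-check: channel LLR plus all other incoming check messages.
        v2c = {(i, j): llr_ch[j] + sum(c2v[(k, j)] for k in range(m)
                                       if H[k, j] and k != i)
               for (i, j) in edges}
        # Check-to-variable: beta * (product of signs) * (min magnitude) over others.
        for (i, j) in edges:
            others = [v2c[(i, jj)] for jj in range(n) if H[i, jj] and jj != j]
            c2v[(i, j)] = beta * np.prod(np.sign(others)) * min(abs(x) for x in others)
        # Tentative hard decision; stop if all parity checks are satisfied.
        total = [llr_ch[j] + sum(c2v[(i, j)] for i in range(m) if H[i, j])
                 for j in range(n)]
        x_hat = np.array([int(t < 0) for t in total])
        if not (H @ x_hat % 2).any():
            break
    return x_hat

# Toy run: hypothetical 3x6 code, all-zero codeword, one unreliable position.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([1.5, -0.4, 2.0, 1.0, 1.2, 0.9])   # bit 1 received in error
print(wms_decode(H, llr))
```

With β = 1 this reduces to plain min-sum; the convergence guarantee described in the abstract applies when β is sufficiently small.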
219

Iterative joint detection and decoding of LDPC-Coded V-BLAST systems

Tsai, Meng-Ying (Brady), 10 July 2008
Soft iterative detection and decoding techniques have been shown to achieve near-capacity performance in multiple-antenna systems. Obtaining the optimal soft information by marginalization over the entire observation space is intractable, and the current literature does not indicate the best way to obtain suboptimal soft information. In this thesis, several existing soft-input soft-output (SISO) detectors, including minimum mean-square error successive interference cancellation (MMSE-SIC), list sphere decoding (LSD), and Fincke-Pohst maximum-a-posteriori (FPMAP), are examined. Prior research has demonstrated that LSD and FPMAP outperform soft-equalization methods (i.e., MMSE-SIC); however, it is unclear which of the two schemes is superior in terms of the performance-complexity trade-off. A comparison is conducted to resolve the matter. In addition, an improved scheme is proposed to modify LSD and FPMAP, providing an error-performance improvement and a reduction in computational complexity simultaneously. Although list-type detectors such as LSD and FPMAP provide outstanding error performance, issues such as the optimal initial sphere radius, the optimal radius-update strategy, and their highly variable computational complexity remain unresolved. A new detection scheme is proposed to address these issues with fixed detection complexity, making the scheme suitable for practical implementation. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2008
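A bare-bones sketch of the MMSE-SIC baseline (hard-decision cancellation on a real-valued BPSK model, with a filter-output magnitude used as an ordering proxy; the thesis's soft-input soft-output version exchanges LLRs with the channel decoder instead) is:

```python
import numpy as np

def mmse_sic_detect(H, y, noise_var):
    """MMSE detection with successive interference cancellation (BPSK, real model).

    At each stage, detect one remaining stream with an MMSE filter, make a
    hard decision, subtract its contribution, and repeat on the residual.
    """
    H = H.astype(float).copy()
    y = y.astype(float).copy()
    n_streams = H.shape[1]
    remaining = list(range(n_streams))
    x_hat = np.zeros(n_streams)
    while remaining:
        Hr = H[:, remaining]
        # MMSE filter bank for the streams not yet detected.
        W = np.linalg.inv(Hr.T @ Hr + noise_var * np.eye(len(remaining))) @ Hr.T
        z = W @ y
        # Pick the stream with the largest filter-output magnitude (proxy for SNR).
        best = int(np.argmax(np.abs(z)))
        k = remaining[best]
        x_hat[k] = np.sign(z[best]) if z[best] != 0 else 1.0
        y = y - H[:, k] * x_hat[k]          # cancel the detected stream
        remaining.pop(best)
    return x_hat

# Toy 4x4 system with BPSK symbols.
rng = np.random.default_rng(7)
H = rng.standard_normal((4, 4))
x = rng.choice([-1.0, 1.0], size=4)
y = H @ x + 0.1 * rng.standard_normal(4)
print("sent:", x, " detected:", mmse_sic_detect(H, y, noise_var=0.01))
```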
220

Digit-Online LDPC Decoding

Marshall, Philip A. Unknown Date
No description available.
