Improving the Error Floor Performance of LDPC Codes with Better Codes and Better Decoders

Error correcting codes are used in virtually all communication systems to ensure reliable transmission of information. In 1948, Shannon established an upper bound, known as the channel capacity, on the rate at which information can be transmitted reliably over a noisy channel. Reliably transmitting information at rates close to this theoretical limit has been the goal of channel coding researchers ever since. The rediscovery of low-density parity-check (LDPC) codes in the 1990s brought renewed excitement to the coding community. LDPC codes are attractive because they can approach channel capacity under suboptimal decoding algorithms whose complexity is linear in the code length. Unsurprisingly, LDPC codes quickly became popular in practical applications such as magnetic storage and wireless and optical communications. One of the most important and challenging problems in LDPC code research is the study and analysis of the error floor phenomenon: an abrupt degradation in the frame error rate performance of LDPC codes in the high signal-to-noise-ratio region. The error floor is harmful because it prevents an LDPC decoder from reaching the very low probabilities of decoding failure required by many applications. Not long after the rediscovery of LDPC codes, researchers established that error floors are caused by certain harmful structures, most commonly known as trapping sets, in the Tanner graph representation of a code. Since then, the study of error floors has mostly consisted of three major problems: 1) estimating the error floor; 2) constructing LDPC codes with low error floors; and 3) designing decoders that are less susceptible to error floors.
Although some parts of this dissertation can serve as important elements in error floor estimation, our main contributions are a novel method for constructing LDPC codes with low error floors and a novel class of low-complexity decoding algorithms that collectively alleviate the error floor. These contributions are summarized as follows. First, a method to construct LDPC codes with low error floors on the binary symmetric channel is presented. Codes are constructed so that their Tanner graphs are free of certain small trapping sets, which are selected from the Trapping Set Ontology for the Gallager A/B decoder on the basis of their relative harmfulness for a given decoding algorithm. We evaluate the relative harmfulness of different trapping sets for the sum-product algorithm by using the topological relations among them and by analyzing decoding failures on one trapping set in the presence or absence of other trapping sets. We apply this method to construct structured LDPC codes. To facilitate the discussion, we give a new description of structured LDPC codes whose parity-check matrices are arrays of permutation matrices. This description uses Latin squares to define a set of permutation matrices with disjoint support and to derive a simple necessary and sufficient condition for the Tanner graph of a code to be free of four-cycles. Second, a new class of bit flipping algorithms for LDPC codes over the binary symmetric channel is proposed. Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at each variable node to represent its "strength"; this additional bit allows an increase in the guaranteed error correction capability. An additional bit is also employed at each check node to capture information beneficial to decoding. Finally, a framework for the failure analysis and selection of two-bit bit flipping algorithms is provided.
The main component of this framework is a (re)definition of trapping sets as the most "compact" Tanner graphs that cause decoding failures of an algorithm. A recursive procedure to enumerate trapping sets is described; this procedure is the basis for selecting a collection of algorithms that work well together. It is demonstrated that decoders employing a properly selected group of the proposed algorithms operating in parallel can offer both high-speed and low-error-floor decoding.
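The abstract mentions a necessary and sufficient condition for a parity-check matrix built from an array of permutation matrices to yield a four-cycle-free Tanner graph. As an illustrative sketch of this kind of condition, the snippet below checks the well-known criterion for the special case of circulant permutation matrices (quasi-cyclic codes); the exponent matrix `E`, the function name, and the example construction are assumptions for illustration, not the dissertation's Latin-square formulation.

```python
# Sketch: four-cycle check for a quasi-cyclic LDPC code whose parity-check
# matrix is an array of p x p circulant permutation matrices P^{E[j][l]}.
# A four-cycle exists iff, for some row pair (j1, j2) and column pair
# (l1, l2), E[j1][l1] - E[j1][l2] + E[j2][l2] - E[j2][l1] == 0 (mod p).
from itertools import combinations

def free_of_four_cycles(E, p):
    """Return True iff the Tanner graph has no four-cycles."""
    rows, cols = len(E), len(E[0])
    for j1, j2 in combinations(range(rows), 2):
        for l1, l2 in combinations(range(cols), 2):
            if (E[j1][l1] - E[j1][l2] + E[j2][l2] - E[j2][l1]) % p == 0:
                return False
    return True

# Example: E[j][l] = j*l mod p (an array-code-style choice); for prime p the
# differences factor as (j1-j2)(l1-l2) mod p, which is never zero, so the
# resulting Tanner graph is four-cycle free.
p = 7
E = [[(j * l) % p for l in range(p)] for j in range(3)]
print(free_of_four_cycles(E, p))  # True
```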
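For background on the bit flipping decoders discussed above, here is a minimal sketch of a classical one-bit bit flipping decoder on the binary symmetric channel, in the variant that flips, in parallel, all bits attached to the maximum number of unsatisfied checks. The function name, the variant, and the small Hamming-code example are illustrative assumptions; this is not the dissertation's two-bit algorithm, which additionally keeps a "strength" bit per variable node.

```python
# Sketch of a classical bit flipping decoder (hard-decision, BSC).
# H: list of parity-check rows over GF(2); y: received hard-decision word.
def bit_flipping(H, y, max_iters=50):
    x = list(y)
    m, n = len(H), len(H[0])
    for _ in range(max_iters):
        # Syndrome: one bit per check node.
        syndrome = [sum(H[i][j] * x[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return x  # all checks satisfied: valid codeword
        # Count unsatisfied checks touching each variable node.
        fails = [sum(syndrome[i] for i in range(m) if H[i][j]) for j in range(n)]
        worst = max(fails)
        # Flip, in parallel, every bit with the worst count.
        for j in range(n):
            if fails[j] == worst:
                x[j] ^= 1
    return x  # may be stuck on a trapping-set-like failure

# Example: (7,4) Hamming code, all-zero codeword, single bit error.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 1, 0, 0, 0, 0]
print(bit_flipping(H, received))  # [0, 0, 0, 0, 0, 0, 0]
```

The parallel flip of all worst-count bits can oscillate on some patterns, which is exactly the kind of failure the dissertation's trapping set analysis studies; `max_iters` bounds the loop in that case.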

Identifier: oai:union.ndltd.org:arizona.edu/oai:arizona.openrepository.com:10150/255172
Date: January 2012
Creators: Nguyen, Dung Viet
Contributors: Vasic, Bane; Marcellin, Michael W.; Djordjevic, Ivan
Publisher: The University of Arizona.
Source Sets: University of Arizona
Language: English
Detected Language: English
Type: text, Electronic Dissertation
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
