1.

Linear Programming Decoding for Non-Uniform Sources and for Binary Channels With Memory

Cohen, Adam, 09 December 2008
Linear programming (LP) decoding of low-density parity-check codes was introduced by Feldman et al. in [1]. In that formulation it is assumed that communication takes place over a memoryless channel and that the source is uniform. Here, we extend the LP decoding paradigm by studying its application to scenarios with source non-uniformity and to decoding over channels with memory. We develop two decoders for the scenario of non-uniform memoryless sources transmitted over memoryless channels. The first decoder uses a modified linear cost function which incorporates the a priori source information and works with systematic codes. The second decoder differs by using non-systematic codes, obtained by puncturing lower-rate systematic codes, together with an “extended decoding polytope.” Simulations show that the modified decoders yield gains over the standard LP decoder.

Next, LP decoding is considered for two channels with memory: the binary additive Markov noise channel and the infinite-memory non-ergodic Polya-contagion channel. For the Markov channel, no linear cost function corresponding to maximum likelihood (ML) decoding could be obtained, and hence it is unclear how to proceed. For the Polya channel, two LP-based decoders are developed. The first is derived in a straightforward manner from the ML decoding rule of [2]. The second relies on a simplification of the same ML decoding rule which holds for codes containing the all-ones codeword. Simulations are performed for both decoders with regular and irregular LDPC codes and demonstrate relatively good performance with respect to the channel epsilon-capacity. / Thesis (Master, Mathematics & Statistics), Queen's University, December 2008.
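To make the cost modification described in the abstract concrete, here is a hedged sketch, assuming Feldman's standard LP relaxation over the fundamental polytope P(H) of the parity-check matrix H (the exact formulation used in the thesis may differ in detail). The standard LP decoder solves

  \hat{f} = \arg\min_{f \in \mathcal{P}(H)} \sum_{i=1}^{n} \gamma_i f_i,
  \qquad
  \gamma_i = \log\frac{\Pr(y_i \mid x_i = 0)}{\Pr(y_i \mid x_i = 1)},

and a non-uniform memoryless source can be folded into the same linear objective by adding a prior term \log\frac{\Pr(x_i = 0)}{\Pr(x_i = 1)} to \gamma_i at the systematic (source) bit positions, turning the relaxed ML rule into an approximate MAP rule while keeping the cost linear.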
2.

On The Analysis of Spatially-Coupled GLDPC Codes and The Weighted Min-Sum Algorithm

Jian, Yung-Yih, 16 December 2013
This dissertation studies methods to achieve reliable communication over unreliable channels. Iterative decoding algorithms for low-density parity-check (LDPC) codes and generalized LDPC (GLDPC) codes are analyzed, and a new class of error-correcting codes is proposed to enhance the reliability of communication in high-speed systems, such as optical communication systems. The class of spatially-coupled GLDPC codes is studied, and a new iterative hard-decision decoding (HDD) algorithm for GLDPC codes is introduced. The main result is that the minimal redundancy allowed by Shannon’s Channel Coding Theorem can be achieved by using the new iterative HDD algorithm with spatially-coupled GLDPC codes. A variety of LDPC ensembles have now been observed to approach capacity with iterative decoding; however, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. To the best of our knowledge, this is the first system that can approach the channel capacity using iterative HDD.

The optimality of a codeword returned by the weighted min-sum (WMS) algorithm, an iterative decoding algorithm that is widely used in practice, is studied as well. Attenuated max-product (AttMP) decoding and WMS decoding for LDPC codes are analyzed. Applying max-product (and belief-propagation) algorithms to loopy graphs is now quite popular for best-assignment problems, largely due to their low computational complexity and impressive performance in practice. Still, there is no general understanding of the conditions required for convergence and/or the optimality of converged solutions. This work presents an analysis of both AttMP decoding and WMS decoding for LDPC codes which guarantees convergence to a fixed point when a weight factor, β, is sufficiently small. It also shows that, if the fixed point satisfies some consistency conditions, then it must be both a linear-programming (LP) and maximum-likelihood (ML) decoding solution.
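As a point of reference for the weighted min-sum updates mentioned in the abstract, here is a hedged sketch, assuming the common attenuated form in which the check-node output is scaled by the weight β ∈ (0, 1] (the precise placement of the weight analyzed in the dissertation may differ):

  m_{c \to v} = \beta \Big( \prod_{v' \in N(c)\setminus v} \operatorname{sgn}\, m_{v' \to c} \Big) \min_{v' \in N(c)\setminus v} \lvert m_{v' \to c} \rvert,
  \qquad
  m_{v \to c} = \lambda_v + \sum_{c' \in N(v)\setminus c} m_{c' \to v},

where \lambda_v is the channel log-likelihood ratio at variable node v and N(\cdot) denotes the neighbourhood in the Tanner graph. Smaller β damps the extrinsic information passed between nodes, which is consistent with the convergence guarantee stated above for sufficiently small weight factors.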
