11. LDPC Codes over Large Alphabets and Their Applications to Compressed Sensing and Flash Memory
Zhang, Fan (August 2010)
This dissertation focuses on the analysis, design, and optimization of low-density parity-check (LDPC) codes over channels with large alphabet sets and their applications to compressed sensing (CS) and flash memories. Compared to belief-propagation (BP) decoding, verification-based (VB) decoding has significantly lower complexity and near-optimal performance when the channel alphabet set is large. We analyze the verification-based decoding of LDPC codes over the q-ary symmetric channel (q-SC) and propose list-message-passing (LMP) decoding, which offers a good tradeoff between complexity and decoding threshold. We prove that LDPC codes with LMP decoding achieve the capacity of the q-SC when q and the block length go to infinity. CS is a newly emerging area closely related to coding theory and information theory: it deals with recovering a sparse signal from a small number of linear measurements. One big challenge in the CS literature is to reduce the number of measurements required to reconstruct the sparse signal. In this dissertation, we show that LDPC codes with verification-based decoding can be applied to CS systems with surprisingly good performance and low complexity. We also discuss the design of modulation codes and error-correcting codes (ECCs) for flash memories. We design asymptotically optimal modulation codes and discuss their improvement using ideas from load-balancing theory. We also design LDPC codes over integer rings and fields with large alphabet sets for flash memories.
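A minimal sketch of the basic verification rules that this family of decoders builds on, assuming q is prime: a satisfied check verifies all of its neighbors, and a check with a single unverified neighbor solves for it. This illustrates plain VB decoding over the q-SC, not the dissertation's LMP variant, and the function and variable names are placeholders.

```python
import numpy as np

def vb_decode(H, y, q, max_iters=50):
    """H: m x n parity-check matrix over Z_q (q prime), y: received word in Z_q^n."""
    m, n = H.shape
    x = np.array(y, dtype=np.int64) % q          # current symbol estimates
    verified = np.zeros(n, dtype=bool)
    for _ in range(max_iters):
        progress = False
        for i in range(m):
            idx = np.nonzero(H[i])[0]            # variables participating in check i
            syndrome = int(H[i, idx].dot(x[idx])) % q
            unverified = [j for j in idx if not verified[j]]
            if syndrome == 0 and unverified:
                # Rule 1: a satisfied check verifies all of its neighbors
                # (a false verification requires errors that cancel, probability ~1/q).
                verified[idx] = True
                progress = True
            elif len(unverified) == 1:
                # Rule 2: with exactly one unverified neighbor, solve the check for it.
                j = unverified[0]
                partial = int(H[i, idx].dot(x[idx]) - H[i, j] * x[j]) % q
                x[j] = (-partial * pow(int(H[i, j]), -1, q)) % q
                verified[j] = True
                progress = True
        if verified.all() or not progress:
            break
    return x, verified
```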
12. Capacity and Coding for 2D Channels
Khare, Aparna (December 2010)
Consider a piece of information printed on paper and scanned in the form of an image. The printer, scanner, and paper naturally form a communication channel, where the printer is equivalent to the sender, the scanner is equivalent to the receiver, and the paper is the medium of communication. The channel created in this way is quite complicated and maps 2D input patterns to 2D output patterns. Inter-symbol interference is introduced in the channel as a result of printing and scanning. During printing, ink from neighboring pixels can spread out. The scanning process can introduce interference in the data obtained because of the finite size of each pixel and the fact that the scanner does not have infinite resolution. Other degradations in the process can be modeled as noise in the system. The scanner may also introduce some spherical aberration due to the lensing effect. Finally, when the image is scanned, it might not be aligned exactly below the scanner, which may lead to rotation and translation of the image.
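As a rough illustration of the inter-symbol interference described above, the sketch below models the print-scan channel as a 2D convolution of the printed bit pattern with a small blur kernel plus additive noise; the kernel and noise values are arbitrary placeholders, not measured printer or scanner characteristics.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(32, 32)).astype(float)   # printed bit pattern

blur = np.array([[0.05, 0.10, 0.05],
                 [0.10, 0.40, 0.10],
                 [0.05, 0.10, 0.05]])                     # ink spread / optics (toy values)

# Each scanned pixel is a weighted mix of its printed neighborhood plus noise.
scanned = convolve2d(bits, blur, mode="same") + 0.05 * rng.standard_normal((32, 32))
```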
In this work, we present a coding scheme for the channel, and possible solutions for a few of the distortions stated above. Our solution consists of the structure, encoding and decoding scheme for the code, a scheme to undo the rotational distortion, and an equalization method.
The motivation behind this work is the question: what is the information capacity of paper? The purpose is to find out how much data can be printed out and retrieved successfully. Of course, this question has potential practical impact on the design of 2D bar codes, which is why encodability is a desired feature. There are, however, a number of other useful applications as well.
We could successfully decode 41.435 kB of data printed on a paper of size 6.7 x 6.7 inches using a Xerox Phasor 550 printer and a Canon CanoScan LiDE200 scanner. As described in the last chapter, the capacity of the paper using this channel is clearly greater than 0.9230 kB per square inch. The main contribution of the thesis lies in constructing the entire system and testing its performance. Since the focus is on encodable and practically implementable schemes, the proposed encoding method is compared with another well-known and easily encodable code, namely the repeat-accumulate code.
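The quoted density follows directly from the numbers above; a quick check:

```python
# 41.435 kB recovered from a 6.7 in x 6.7 in printed area.
data_kb = 41.435
area_sq_in = 6.7 * 6.7           # 44.89 square inches
print(data_kb / area_sq_in)      # ~0.9231 kB per square inch
```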
13. FPGA Implementation of a Clockless Stochastic LDPC Decoder
Ceroici, Christopher (January 2014)
This thesis presents a clockless stochastic low-density parity-check (LDPC) decoder implemented on a Field-Programmable Gate Array (FPGA). Stochastic computing reduces the wiring complexity necessary for decoding by replacing operations such as multiplication and division with simple logic gates. Clockless decoding increases the throughput of the decoder by eliminating the requirement for node signals to be synchronized after each decoding cycle. With this partial-update algorithm, the decoder's speed is limited by the average wire delay of the interleaver rather than the worst-case delay. This type of decoder has been simulated in the past but not implemented on silicon. The design is implemented on an Altera Stratix IV EP4SGX230 FPGA, and the frame error rate (FER) performance, throughput, and power consumption are presented for (96,48) and (204,102) decoders.
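A small illustration of the stochastic-computing idea mentioned above: probabilities are carried as random bit streams, so multiplication reduces to a single AND gate per bit pair, with stream length trading accuracy for latency. This is a generic textbook example, not the thesis's decoder logic.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 10_000                       # stream length (accuracy vs. latency tradeoff)
p_a, p_b = 0.8, 0.6              # probabilities to multiply
a = rng.random(L) < p_a          # stochastic bit stream encoding p_a
b = rng.random(L) < p_b          # stochastic bit stream encoding p_b
prod = a & b                     # AND gate: P(prod = 1) = p_a * p_b
print(prod.mean())               # ~0.48
```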
14. Flexible encoder and decoder designs for low-density parity-check codes
Kopparthi, Sunitha
Doctor of Philosophy / Department of Electrical and Computer Engineering / Don M. Gruenbacher
Future technologies such as cognitive radio require flexible and reliable hardware architectures that can be easily configured and adapted to varying coding parameters. The objective of this work is to develop a flexible hardware encoder and decoder for low-density parity-check (LDPC) codes. The design methodologies used for the implementation of an LDPC encoder and decoder are flexible in terms of parity-check matrix, code rate, and code length. All of these designs are implemented on a programmable chip and tested.
Encoder implementations of LDPC codes are optimized for area due to their high complexity, so such designs usually have a relatively low data rate. Two new encoder designs are developed that achieve much higher data rates, up to 844 Mbps, while requiring more area for implementation. Using structured LDPC codes decreases the encoding complexity and provides design flexibility. An encoder architecture is presented that adheres to the structured LDPC codes defined in the IEEE 802.16e standard.
A single encoder design is also developed that accommodates different code lengths and code rates and does not require re-synthesis of the design in order to change the encoding parameters. The flexible encoder design for structured LDPC codes is also implemented on a custom chip. The maximum coded data rate of the structured encoder is up to 844 Mbps and for a given code rate its value is independent of the code length.
An LDPC decoder is designed whose design methodology is generic: it is applicable to both structured and randomly generated LDPC codes. The coded data rate of the decoder increases with the code length. The number of decoding iterations used in the decoding process plays an important role in determining the decoder's performance and latency. This design validates the estimated codeword after every iteration and stops the decoding process when the correct codeword is estimated, which saves power. For a given parity-check matrix and signal-to-noise ratio, a procedure is presented to find an optimum value of the maximum number of decoding iterations that considers the effects of power, delay, and error performance.
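A minimal sketch of the early-termination rule described above: after each iteration the hard-decision word is checked against the parity-check matrix, and decoding stops as soon as the syndrome is zero. The helper functions are hypothetical stand-ins for the actual decoder internals.

```python
import numpy as np

def decode_with_early_stop(H, llr, max_iters, llr_update, hard_decision):
    """H: binary parity-check matrix; llr_update / hard_decision: decoder callbacks."""
    for it in range(max_iters):
        llr = llr_update(llr)                  # one message-passing iteration
        c_hat = hard_decision(llr)             # tentative codeword estimate
        if not np.any(H.dot(c_hat) % 2):       # zero syndrome: valid codeword found
            return c_hat, it + 1               # stop early, saving iterations and power
    return hard_decision(llr), max_iters
```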
15. Turbo Equalization for OFDM over the Doubly-Spread Channel using Nonlinear Programming
Iltis, Ronald A. (October 2011)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
OFDM has become the preferred modulation format for a wide range of wireless networks, including 802.11g, 802.16e (WiMAX), and 4G LTE. For multipath channels which are time-invariant during an OFDM symbol duration, near-optimal demodulation is achieved using the FFT followed by scalar equalization. However, demodulating OFDM on the doubly-spread channel remains a challenging problem, as time variations within a symbol generate intercarrier interference. Furthermore, demodulation and channel estimation must be effectively combined with decoding of the LDPC code in the 4G-type system considered here. This paper presents a new turbo equalization (TEQ) decoder, detector, and channel estimator for OFDM on the doubly-spread channel based on nonlinear programming. We combine the Penalty Gradient Projection TEQ with an MMSE-type channel estimator (PGP-TEQ) and show that it yields a convergent algorithm. Simulation results are presented comparing the conventional MMSE TEQ using the Sum Product Algorithm (MMSE-SPA-TEQ) with the new PGP-TEQ for doubly-spread channels.
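A toy numerical illustration of why time variation within one OFDM symbol creates intercarrier interference: the frequency-domain channel matrix F H_t F^H is diagonal only when the time-domain channel is constant over the symbol. This is generic background for the problem statement, not the paper's PGP-TEQ algorithm.

```python
import numpy as np

N = 8
F = np.fft.fft(np.eye(N)) / np.sqrt(N)            # unitary DFT matrix

def freq_domain_channel(time_varying):
    # Flat (single-tap) channel whose gain drifts across the OFDM symbol when
    # time_varying is True; no delay spread, to isolate the time-variation effect.
    drift = 0.3 * np.arange(N) / N if time_varying else np.zeros(N)
    Ht = np.diag(1.0 + 1j * drift)                 # time-domain channel matrix
    return F @ Ht @ F.conj().T                     # frequency-domain channel matrix

def max_ici(G):
    return np.abs(G - np.diag(np.diag(G))).max()   # largest off-diagonal (ICI) term

print(max_ici(freq_domain_channel(False)))         # ~1e-16: no ICI when time-invariant
print(max_ici(freq_domain_channel(True)))          # clearly nonzero: ICI from time variation
```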
16. FPGA implementation of advanced FEC schemes for intelligent aggregation networks
Zou, Ding; Djordjevic, Ivan B. (13 February 2016)
In state-of-the-art fiber-optic communication systems, a fixed forward error correction (FEC) scheme and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate, by FPGA-based emulation, its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code-rate range, making it a viable solution for the next-generation high-speed intelligent aggregation networks.
17. An FPGA design of generalized low-density parity-check codes for rate-adaptive optical transport networks
Zou, Ding; Djordjevic, Ivan B. (13 February 2016)
Forward error correction (FEC) is one of the key technologies enabling next-generation high-speed fiber-optic communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that, with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^-15, making it a viable solution for next-generation high-speed fiber-optic communications.
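To illustrate the "Hamming code as local code" idea, the sketch below decodes one generalized-check constraint with a standard (7,4) Hamming syndrome decoder that corrects a single bit error; it is a generic textbook component, not the paper's specific GLDPC construction.

```python
import numpy as np

# (7,4) Hamming parity-check matrix: column j is the binary representation of j+1.
H_local = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

def local_decode(r):
    s = H_local.dot(r) % 2                       # syndrome of the local constraint
    if s.any():
        pos = int(s[0] + 2 * s[1] + 4 * s[2]) - 1  # syndrome points at the erroneous bit
        r = r.copy()
        r[pos] ^= 1                              # flip the single erroneous bit
    return r

c = np.array([1, 0, 1, 1, 0, 1, 0])              # a valid local codeword (H_local @ c = 0 mod 2)
r = c.copy(); r[4] ^= 1                          # inject one bit error
print(np.array_equal(local_decode(r), c))        # True
```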
18. Error Errore Eicitur: A Stochastic Resonance Paradigm for Reliable Storage of Information on Unreliable Media
Ivanis, Predrag; Vasic, Bane
We give an architecture of a storage system consisting of a storage medium made of unreliable memory elements and an error correction circuit made of a combination of noisy and noiseless logic gates that is capable of retaining the stored information with a lower probability of error than a storage system with a correction circuit made completely of noiseless logic gates. Our correction circuit is based on the iterative decoding of low-density parity-check codes, and uses the positive effect of errors in logic gates to correct errors in memory elements. In the spirit of Marcus Tullius Cicero's Clavus clavo eicitur (one nail drives out another), the proposed storage system operates on the principle error errore eicitur: one error drives out another. The randomness that is present in the logic gates makes this class of decoders superior to their noiseless counterparts. Moreover, the random perturbations do not require any additional computational resources, as they are inherent to the unreliable hardware itself. To utilize the benefits of logic gate failures, our correction circuit relies on two key novelties: a mixture of reliable and unreliable gates, and decoder rewinding. We present a method based on absorbing Markov chains for the probability-of-error analysis, and explain how the randomness in the variable and check node update functions helps the decoder escape from the local minima associated with trapping sets.
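The absorbing-Markov-chain analysis mentioned above rests on a standard result: with transient-to-transient block Q and transient-to-absorbing block R, the absorption probabilities are N R, where N = (I - Q)^{-1} is the fundamental matrix. The tiny chain below is a toy example, not the decoder's actual state space.

```python
import numpy as np

Q = np.array([[0.5, 0.2],
              [0.3, 0.4]])          # transitions among the transient states
R = np.array([[0.2, 0.1],
              [0.1, 0.2]])          # transitions into the two absorbing states

N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix: expected visits to transient states
B = N @ R                           # B[i, j] = P(absorbed in state j | started in transient state i)
print(B.sum(axis=1))                # each row sums to 1
```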
19. Optimization of LDPC Decoding Strategies in Impulsive Environments: Application to Sensor and Ad Hoc Networks
Ben Maad, Hassen (29 June 2011)
The goal of this PhD thesis is to study the behavior of LDPC codes in an environment where the interference generated by the network is not Gaussian but impulsive. A first quick study shows that, without precautions, the performance of these codes degrades very significantly. We first study the possible approaches for modeling impulsive noise. In the case of the multiple-access interference that arises in ad hoc and sensor networks, alpha-stable distributions are an appropriate choice: they generalize the Gaussian distribution, are stable under convolution, and can be theoretically justified in several situations. We then determine the capacity of the alpha-stable environment and show, using an asymptotic approach, that LDPC codes in this environment are good, but that a simple linear operation on the samples at the decoder input is not enough to obtain good performance. We therefore propose several ways of computing the likelihoods at the decoder input. The optimal approach is very complex to implement. We study several alternative approaches, in particular clipping, for which we search for the optimal parameters.
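A hedged sketch of the two ingredients discussed above: symmetric alpha-stable noise generated with the Chambers-Mallows-Stuck method (valid for alpha != 1), and a clipped receiver metric that limits the influence of large impulses. The clipping threshold is a free parameter that the thesis optimizes; the value below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def sas_noise(alpha, scale, size):
    # Chambers-Mallows-Stuck generator for symmetric alpha-stable samples (alpha != 1).
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    x = (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)) * \
        (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha)
    return scale * x

def clipped_metric(y, t):
    # Linear near zero, saturated beyond +/- t, so a single impulse cannot dominate.
    return np.clip(y, -t, t)

y = 1.0 + sas_noise(alpha=1.5, scale=0.3, size=10)   # BPSK symbol +1 in SaS noise
print(clipped_metric(y, t=2.0))
```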
20. High performance turbo equalisation for faster-than-Nyquist satellite communications
Abelló Barberán, Albert (15 November 2018)
In order to increase the spectral efficiency of digital communication systems, the faster-than-Nyquist (FTN) approach increases the symbol rate beyond the occupied bandwidth of the transmitted signal, independently of the constellation type and size. It has been shown that the information rates of FTN systems are greater than those of Nyquist systems. However, violating the Nyquist criterion introduces inter-symbol interference, so appropriate reception techniques must be used. At the receiver, the channel-shortening approach consists of a receive filter followed by a BCJR algorithm that computes approximate a posteriori symbol probabilities using a modified channel response of reduced length. In the literature, the channel-shortening receive filters are chosen to maximize the generalized mutual information (GMI), an optimization performed with numerical methods. In this PhD thesis, we propose a closed-form solution for all channel-shortening filters under the GMI maximization criterion when a priori information is available at the receiver. We show that the minimum mean square error (MMSE) equalizer is a particular case of the channel-shortening approach. Within the framework of turbo equalization, we then study an estimator that obtains a priori symbol information from the output of the error-correcting decoder. Finally, we evaluate the performance of the complete system with channel coding over an additive white Gaussian noise channel.
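Since the thesis shows the MMSE equalizer to be a special case of the channel-shortening receiver, a minimal block MMSE linear equalizer for FTN-induced inter-symbol interference is sketched below as that baseline, assuming a known real-valued ISI response, unit-energy symbols, and AWGN; the tap and noise values are illustrative only.

```python
import numpy as np

def mmse_equalize(y, h, sigma2, n_symbols):
    # Build the Toeplitz convolution matrix mapping symbols to observations.
    L = len(h)
    H = np.zeros((n_symbols + L - 1, n_symbols))
    for k in range(n_symbols):
        H[k:k + L, k] = h
    # Block MMSE estimate: (H^T H + sigma2 I)^{-1} H^T y  (unit-energy symbols).
    A = H.T @ H + sigma2 * np.eye(n_symbols)
    return np.linalg.solve(A, H.T @ y)

rng = np.random.default_rng(2)
h = np.array([0.3, 1.0, 0.3])                     # toy FTN-like ISI response
x = rng.choice([-1.0, 1.0], size=20)              # BPSK symbols
y = np.convolve(x, h) + 0.1 * rng.standard_normal(len(x) + len(h) - 1)
x_hat = np.sign(mmse_equalize(y, h, sigma2=0.01, n_symbols=len(x)))
print(np.mean(x_hat == x))                        # fraction of correctly detected symbols
```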