1 |
Puncturing, mapping, and design of low-density parity-check codes. Richter, Gerd. January 2008.
Also published as: Ulm, Univ., dissertation, 2008.
|
2 |
Utilizing Correct Prior Probability Calculation to Improve Performance of Low-Density Parity-Check Codes in the Presence of Burst Noise. Neal, David A. 01 May 2012.
Low-density parity-check (LDPC) codes provide excellent error correction performance and can approach the channel capacity, but their performance degrades significantly in the presence of burst noise. Bursts of errors occur in many common channels, including the magnetic recording and wireless communications channels. Strategies such as interleaving have been developed to help compensate for burst errors. These techniques do not exploit the differences in noise variance between observations inside and outside the bursts. These differences can be exploited in the calculation of prior probabilities to improve the accuracy of the soft information that is sent to the LDPC decoder.
Effects of using different noise variances in the calculation of prior probabilities are investigated. Using the true variance of each observation improves performance. A novel burst detector utilizing the forward/backward algorithm is developed to determine the state of each observation, allowing the correct variance to be selected for each one. Comparisons between this approach and existing techniques demonstrate improved performance. The approach is generalized, and potential future research is discussed.
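As a minimal illustration of the idea (not code from the thesis), the sketch below computes channel LLR priors for BPSK over AWGN using a per-observation noise variance supplied by a burst detector; the two-state model, the variance values, and the function names are assumptions for the example.

```python
import numpy as np

def prior_llrs(y, burst_state, sigma2_nominal=0.25, sigma2_burst=4.0):
    """Channel LLRs for BPSK (+1/-1) in AWGN with a per-sample noise variance.

    y           : received samples
    burst_state : 0/1 array from a burst detector (1 = inside a burst)
    The two variance values are illustrative placeholders, not thesis values.
    """
    sigma2 = np.where(burst_state == 1, sigma2_burst, sigma2_nominal)
    # For BPSK in AWGN, LLR = log p(y|0)/p(y|1) = 2*y/sigma^2
    return 2.0 * y / sigma2

# Example: samples 40-59 fall inside a detected burst
y = 1.0 + 0.5 * np.random.randn(128)
state = np.zeros(128, dtype=int)
state[40:60] = 1
llrs = prior_llrs(y, state)
```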
|
3 |
Circuit Design of LDPC Decoder for IEEE 802.16e Systems. Wang, Jhih-hao. 29 March 2010.
A circuit design of a Low-Density Parity-Check (LDPC) decoder with a new overlapped method for IEEE 802.16e systems is proposed in this thesis. The circuit can operate in 19 modes corresponding to block sizes of 576, ..., 2304. LDPC decoders can be implemented by iterating variable-node and check-node processes. With our proposed overlapped method, the hardware utilization ratio can be enhanced from 50% to 100%, which is better than the traditional overlapped method. In [2], the traditional overlapped method can only enhance the utilization ratio from 50% to 75% for an IEEE 802.16e LDPC decoder with code rate 1/2. Under the same operating frequency, our proposed method therefore provides a further 25% improvement compared with the traditional overlapped method [2]. In this thesis, we also propose two circuit architectures to increase the operating frequency. First, we use a faster comparison circuit in our comparison unit [1]. Second, we use the Carry Save Adder (CSA) method [8] to replace the common adder unit.
The circuit is implemented in a TSMC 0.18 µm CMOS 1P6M process with a chip area of 3.11 x 3.08 mm². In gate-level simulation, the output data rate of the circuit is above 78.4 MHz, so the circuit meets the requirements of the IEEE 802.16e system.
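For readers unfamiliar with the variable-node/check-node iteration that such decoder hardware implements, the following is a minimal software sketch of min-sum LDPC decoding; it illustrates the two processing phases only and is not a model of the proposed circuit, its overlapped scheduling, or its CSA arithmetic.

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=20):
    """Minimal min-sum LDPC decoding sketch (software model only).

    H is a small dense 0/1 parity-check matrix and llr the channel LLRs;
    both are illustrative stand-ins for the 802.16e code structure.
    """
    m, n = H.shape
    msg_v2c = np.tile(llr, (m, 1)) * H              # variable-to-check messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        # Check-node process: sign product and minimum magnitude over each row
        msg_c2v = np.zeros_like(msg_v2c)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = idx[idx != j]
                sign = np.prod(np.sign(msg_v2c[i, others]))
                msg_c2v[i, j] = sign * np.min(np.abs(msg_v2c[i, others]))
        # Variable-node process: channel LLR plus extrinsic check messages
        total = llr + msg_c2v.sum(axis=0)
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):                # all parity checks satisfied
            break
        msg_v2c = (total - msg_c2v) * H
    return hard
```

In hardware, these two phases are the processes that an overlapped method interleaves to raise the utilization of the processing units.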
|
4 |
Hybrid Compressed-and-Forward Relaying Based on Compressive Sensing and Distributed LDPC Codes. Lin, Yu-Liang. 26 July 2012.
Cooperative communication has been shown to be an effective way to combat outages caused by channel fading; that is, it provides spatial diversity for communication. Besides amplify-and-forward (AF) and decode-and-forward (DF), compressed-and-forward (CF) is also an efficient forwarding strategy. In this thesis, we propose a new CF scheme. In the existing CF protocol, the relay switches to the DF mode when it can completely recover the signal transmitted by the source; no further compression is performed in that case. In our proposed scheme, the relay estimates whether the codeword in a block has been decoded successfully and chooses the corresponding LDPC-coded forwarding method, based on either joint source-channel coding or compressive sensing. At the destination, a joint decoder with side information performs the sum-product algorithm (SPA) to decode the source message. Simulation results show that the proposed CF scheme achieves spatial diversity and outperforms the AF and DF schemes.
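A minimal sketch of the relay-side decision described above, assuming the relay chooses between forwarding a successfully decoded block and compressively measuring its estimate; the function names, the success flag, and the Gaussian sensing matrix are assumptions, not the thesis design.

```python
import numpy as np

def relay_forward(decoded_ok, estimate, sensing_matrix):
    """Illustrative relay-side branch for a hybrid CF scheme.

    decoded_ok     : True if the relay's LDPC decoder converged on the block
    estimate       : the relay's estimate of the source block
    sensing_matrix : random measurement matrix for compressive sensing
    All names and the Gaussian sensing matrix below are assumptions.
    """
    if decoded_ok:
        # Block recovered: forward the re-encoded block (DF-like branch)
        return "decode-and-forward", estimate
    # Block not recovered: forward a compressed version of the estimate
    return "compress-and-forward", sensing_matrix @ estimate

# Example with an assumed 4x16 Gaussian sensing matrix
phi = np.random.randn(4, 16) / np.sqrt(4)
mode, payload = relay_forward(False, np.random.randn(16), phi)
print(mode, payload.shape)   # compress-and-forward (4,)
```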
|
5 |
Design of Low-Cost Low-Density Parity-Check Code Decoder. Liao, Wei-Chung. 06 September 2005.
With the enormous growth of mobile communication applications, reducing the power dissipation of wireless communication has become an important issue that attracts much attention. One of the key techniques to achieve low-power transmission is to develop a powerful channel coding scheme that provides good error-correcting capability even at low signal-to-noise ratio. In recent years, the trend in error control code development has been toward iterative decoding algorithms, which can lead to higher coding gain. In particular, the rediscovered low-density parity-check (LDPC) code has become the most famous code since the introduction of the Turbo code, because it comes closest to the well-known Shannon limit. However, since the block size used in LDPC is usually very large and the parity-check matrix is quite random, the hardware implementation of LDPC is very difficult; it may require a significant number of arithmetic units as well as a very complex routing topology. Therefore, this thesis addresses several design issues of LDPC decoders. First, simulation results for several LDPC decoder architectures operating without SNR estimation are provided and show that some architectures achieve performance close to those with SNR estimation. Second, a novel message quantization method is proposed and applied in the LDPC decoder design to reduce the memory and table sizes as well as the routing complexity. Finally, several early termination schemes for LDPC are considered, and it is found that up to 42% of the bit-node operations can be saved.
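As a generic illustration of the early-termination idea (not the specific schemes evaluated in the thesis), a syndrome check can stop the iterations as soon as all parity checks are satisfied:

```python
import numpy as np

def early_termination(H, hard_bits):
    """Generic syndrome-based stopping test for LDPC iterations.

    Returns True when every parity check is satisfied, i.e. the current
    hard decisions already form a codeword and further iterations
    (and their bit-node operations) can be skipped.
    """
    return not np.any(H @ hard_bits % 2)

# Contrived 3x6 parity-check matrix and a valid codeword for illustration
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 1, 0])
assert early_termination(H, codeword)
```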
|
6 |
Implementation of Iterative Decoding of LDPC Codes for Wireless MIMO Receivers. Φρέσκος, Σταμάτιος. 08 March 2010.
In this diploma thesis, we studied coding methods that use large parity-check matrices, as employed and applied in previous studies. We chose to design a decoder based on the WiMAX IEEE 802.16e transmission standard, specifically with a transmitter and a receiver that use more than one antenna. We therefore present the theory related to this topic, both from the coding perspective and from the perspective of wireless MIMO transmission and the WiMAX standard. We analyze each part of the simulated system and present the simulation results.
|
7 |
Iterative Decoding Beyond Belief Propagation of Low-Density Parity-Check Codes. Planjery, Shiva Kumar. January 2013.
The recent renaissance of one particular class of error-correcting codes called low-density parity-check (LDPC) codes has revolutionized the area of communications leading to the so-called field of modern coding theory. At the heart of this theory lies the fact that LDPC codes can be efficiently decoded by an iterative inference algorithm known as belief propagation (BP) which operates on a graphical model of a code. With BP decoding, LDPC codes are able to achieve an exceptionally good error-rate performance as they can asymptotically approach Shannon's capacity. However, LDPC codes under BP decoding suffer from the error floor phenomenon, an abrupt degradation in the error-rate performance of the code in the high signal-to-noise ratio region, which prevents the decoder from achieving very low error-rates. It arises mainly due to the sub-optimality of BP decoding on finite-length loopy graphs. Moreover, the effects of finite precision that stem from hardware realizations of BP decoding can further worsen the error floor phenomenon. Over the past few years, the error floor problem has emerged as one of the most important problems in coding theory with applications now requiring very low error rates and faster processing speeds. Further, addressing the error floor problem while taking finite precision into account in the decoder design has remained a challenge. In this dissertation, we introduce a new paradigm for finite precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs), are capable of surpassing the BP in the error floor region at a much lower complexity and memory usage than BP without any compromise in decoding latency. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder. Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability which improves the error floor performance. We provide a methodology for the design of FAIDs on column-weight-three codes. Using this methodology, we design 3-bit precision FAIDs that can surpass the BP (floating-point) in the error floor region on several column-weight-three codes of practical interest. While the proposed FAIDs are able to outperform the BP decoder with low precision, the analysis of FAIDs still proves to be a difficult issue. Furthermore, their achievable guaranteed error correction capability is still far from what is achievable by the optimal maximum-likelihood (ML) decoding. In order to address these two issues, we propose another novel class of decoders called decimation-enhanced FAIDs for LDPC codes. For this class of decoders, the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during decoding, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes and provide insights into the analysis of such decoders. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. 
The proposed adaptive decimation scheme adds only marginal complexity but can significantly increase the slope of the error floor in the error-rate performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by ML decoding.
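As a rough, hypothetical sketch of the finite-alphabet idea only (the 3-bit FAID maps designed in the dissertation are carefully constructed and are not this simple rule), a variable-node update over a finite message alphabet might look like:

```python
import numpy as np

# Toy 7-level message alphabet {-3, ..., +3}; the levels, weight, and rule
# below are hypothetical and are NOT the maps designed in the dissertation.
LEVELS = np.arange(-3, 4)

def vn_update(channel_value, extrinsic_msgs, weight=1):
    """Hypothetical variable-node map for a column-weight-three code:
    combine the quantized channel value with the two incoming check
    messages and clip the result back onto the finite alphabet."""
    raw = weight * channel_value + sum(extrinsic_msgs)
    return int(np.clip(raw, LEVELS[0], LEVELS[-1]))

# Example: channel value -1, incoming check messages +2 and +3
print(vn_update(-1, [2, 3]))   # clipped to +3
```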
|
8 |
Modern coding schemes for unequal error protection. Deetzen, Neele von. January 2009.
Also published as: Bremen, Univ., dissertation, 2009.
|
9 |
Iterative Coding and Equalization Methods for Optical Communication (Iterative Codierungs- und Entzerrungsverfahren für die optische Nachrichtenübertragung). Schorr, Torsten. January 2006.
Also published as: Kaiserslautern, Techn. Univ., dissertation, 2006.
|
10 |
Optimized belief propagation decoding for low delay applications in digital communications. Hehn, Thorsten. January 2009.
Also published as: Erlangen-Nürnberg, Univ., dissertation, 2009.
|