671.
Iterative Decoding Beyond Belief Propagation of Low-Density Parity-Check Codes. Planjery, Shiva Kumar. January 2013.
The recent renaissance of one particular class of error-correcting codes, low-density parity-check (LDPC) codes, has revolutionized the area of communications, leading to the so-called field of modern coding theory. At the heart of this theory lies the fact that LDPC codes can be efficiently decoded by an iterative inference algorithm known as belief propagation (BP), which operates on a graphical model of the code. Under BP decoding, LDPC codes achieve exceptionally good error-rate performance, as they can asymptotically approach Shannon's capacity. However, LDPC codes under BP decoding suffer from the error floor phenomenon: an abrupt degradation in the error-rate performance of the code in the high signal-to-noise-ratio region that prevents the decoder from reaching very low error rates. It arises mainly from the sub-optimality of BP decoding on finite-length loopy graphs. Moreover, the effects of finite precision that stem from hardware realizations of BP decoding can further worsen the error floor. Over the past few years, the error floor problem has emerged as one of the most important problems in coding theory, with applications now requiring very low error rates and faster processing speeds. Addressing the error floor while taking finite precision into account in the decoder design has remained a challenge.
In this dissertation, we introduce a new paradigm for finite-precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs), are capable of surpassing BP in the error floor region at much lower complexity and memory usage, without any compromise in decoding latency. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder. Rather, the update functions are simple maps designed to ensure a higher guaranteed error-correction capability, which improves the error floor performance. We provide a methodology for the design of FAIDs on column-weight-three codes. Using this methodology, we design 3-bit precision FAIDs that surpass floating-point BP in the error floor region on several column-weight-three codes of practical interest.
While the proposed FAIDs outperform the BP decoder at low precision, their analysis remains difficult, and their achievable guaranteed error-correction capability is still far from that of optimal maximum-likelihood (ML) decoding. To address these two issues, we propose another novel class of decoders, decimation-enhanced FAIDs, in which the technique of decimation is incorporated into the variable node update function. Decimation, which involves fixing certain bits of the code to a particular value during decoding, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes and provide insights into the analysis of such decoders. We also show how decimation can be used adaptively to further enhance the guaranteed error-correction capability of FAIDs that are already good on a given code. The proposed adaptive decimation scheme adds only marginal complexity but can significantly increase the slope of the error floor in the error-rate performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability close to the theoretical limit achieved by ML decoding.
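To give a concrete feel for the decoder family described above, the sketch below implements a 7-level (3-bit) finite-alphabet message-passing decoder for the BSC. The check node update is the min-sum rule used by FAIDs; the variable node update shown here is a plain clipped sum, an illustrative stand-in for the designed look-up-table maps of the dissertation, and the toy parity-check matrix is an assumed example.

```python
import numpy as np

def faid_decode(H, y, max_iter=30):
    """Sketch of a 7-level finite-alphabet iterative decoder on the BSC.

    H: binary parity-check matrix (m x n); y: received hard-decision word.
    Channel values are mapped to {+1, -1}; messages live in {-3, ..., +3}.
    The check update is the sign-product/min-magnitude (min-sum) rule;
    the variable update is a clipped sum, NOT a designed FAID map.
    """
    m, n = H.shape
    chan = 1 - 2 * np.asarray(y, dtype=int)        # 0 -> +1, 1 -> -1
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    v2c = {e: int(chan[e[1]]) for e in edges}      # variable-to-check msgs
    for _ in range(max_iter):
        # Check node update: product of signs times minimum magnitude.
        c2v = {}
        for (i, j) in edges:
            others = [v2c[(i, jj)] for jj in range(n) if H[i, jj] and jj != j]
            sign = int(np.prod(np.sign(others)))
            c2v[(i, j)] = sign * min(abs(x) for x in others)
        # Variable node update: clip channel value plus incoming messages.
        for (i, j) in edges:
            others = [c2v[(ii, j)] for ii in range(m) if H[ii, j] and ii != i]
            v2c[(i, j)] = int(np.clip(chan[j] + sum(others), -3, 3))
        # Tentative hard decision and syndrome check.
        tot = [chan[j] + sum(c2v[(i, j)] for i in range(m) if H[i, j])
               for j in range(n)]
        x = np.array([0 if t >= 0 else 1 for t in tot])
        if not np.any(H @ x % 2):
            return x
    return x

# Toy usage on a small assumed parity-check matrix.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
y = np.array([0, 0, 1, 0, 0, 0])   # all-zero codeword with one bit flipped
print(faid_decode(H, y))           # recovers the all-zero codeword
```

For a column-weight-three code, each variable node sees only two extrinsic check messages plus the channel value, so a designed update map is a small two-input table per channel value; this is what makes 3-bit FAIDs attractive for hardware.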
672.
Orthogonal frequency division multiplexing for digital broadcasting. Kim, Dukhyun. 12 1900.
No description available.
673.
Performance analysis of linear block codes over the queue-based channel. Al-Lawati, Haider. 29 August 2007.
Most coding schemes used in today's communication systems are designed for memoryless channels. These codes break down when transmitted over channels with memory, which is what real-world channels look like, since errors often occur in bursts. Such systems therefore employ interleaving to spread the errors so that the channel appears more or less memoryless to the decoder, at the cost of added delay and complexity. In addition, they fail to exploit the memory of the channel, even though memory increases capacity for a wide class of channels. On the other hand, most channels with memory do not have simple and mathematically tractable models, making the design of suitable channel codes more challenging and possibly impractical. Recently, a new model known as the queue-based channel (QBC) has been proposed; it is simple enough for mathematical analysis yet rich enough to model wireless fading channels.
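To make "memory" concrete before the results are summarized, the minimal sketch below simulates an additive first-order Markov binary noise channel, the simplest QBC scenario mentioned at the end of this abstract. The parameterization (marginal bit error rate p, noise correlation rho) is the standard one for stationary first-order Markov binary noise and is an assumption of this illustration, not taken from the thesis.

```python
import numpy as np

def markov_noise(n, p=0.05, rho=0.5, rng=None):
    """Stationary first-order Markov binary noise with P(Z=1) = p and
    corr(Z_t, Z_{t-1}) = rho. rho = 0 recovers the memoryless BSC;
    rho > 0 makes errors arrive in bursts while the marginal stays p."""
    rng = rng or np.random.default_rng()
    z = np.empty(n, dtype=int)
    z[0] = rng.random() < p
    for t in range(1, n):
        # Transition probabilities chosen to keep the marginal Bernoulli(p).
        p1 = p + rho * (1 - p) if z[t - 1] else p * (1 - rho)
        z[t] = rng.random() < p1
    return z

z = markov_noise(10_000, p=0.05, rho=0.8)
print(z.mean())   # approximately 0.05, but the errors cluster into bursts
```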
In this work, we examine the performance of linear block codes when transmitted over this channel. Our study has two parts. First, we investigate maximum-likelihood (ML) decoding of binary linear block codes over the QBC. Since it is well known that, for binary symmetric memoryless channels, ML decoding reduces to minimum Hamming distance decoding, our objective here is to explore whether a similar relation exists between these two decoding schemes when the channel does have memory. We give a partial answer for the case of perfect and quasi-perfect codes.
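The memoryless baseline behind this question is easy to state in code: on a BSC with crossover probability below 1/2, the ML codeword is simply the one at minimum Hamming distance from the received word. The brute-force sketch below uses an assumed toy (6,3) code; the thesis asks when this reduction survives on a channel with memory.

```python
import numpy as np
from itertools import product

# Generator matrix of a toy (6,3) binary linear code (assumed example).
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

# Enumerate all 2^3 codewords.
codebook = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=3)])

def min_distance_decode(r):
    # On a memoryless BSC with crossover < 1/2, ML decoding reduces to
    # picking the codeword at minimum Hamming distance from r.
    dists = np.sum(codebook != r, axis=1)
    return codebook[np.argmin(dists)]

r = np.array([1, 1, 1, 0, 1, 1])   # codeword 110011 with its third bit flipped
print(min_distance_decode(r))      # -> [1 1 0 0 1 1]
```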
Next, we study Reed-Solomon (RS) codes and analyze their performance when transmitted over the QBC under the assumption of bounded-distance decoding. In particular, we examine the two interleaving strategies encountered when dealing with non-binary codes over a binary-input channel, namely symbol interleaving and bit interleaving. We compare the two schemes analytically and show that symbol interleaving always outperforms bit interleaving. Non-interleaved Reed-Solomon codes are also covered: we derive some useful expressions pertaining to the calculation of the probability of codeword error. The performance of non-interleaved RS codes is compared to that of interleaved ones for the simplest QBC scenario, the additive (first-order) Markov noise channel with non-negative noise correlation. Thesis (Master, Mathematics & Statistics), Queen's University, 2007.
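The intuition behind the symbol-versus-bit interleaving comparison can be seen with a toy burst: symbol interleaving confines a burst to few RS symbols in any one codeword, while bit interleaving scatters the burst bits so that each corrupted bit can ruin a different symbol. The counting sketch below is a conceptual illustration under assumed parameters, not the analytical comparison carried out in the thesis.

```python
import numpy as np

m, k_sym = 4, 8          # bits per RS symbol, symbols per codeword (assumed)
depth = 4                # interleaving depth: 4 codewords share the channel
n_bits = depth * k_sym * m

noise = np.zeros(n_bits, dtype=int)
noise[40:52] = 1         # a 12-bit error burst on the binary channel

def symbol_errors(bits_of_codeword):
    # Count, per codeword, how many m-bit symbols contain at least one
    # erroneous bit; bounded-distance RS decoding depends on this count.
    errs = np.zeros(depth, dtype=int)
    for cw in range(depth):
        bits = bits_of_codeword(cw)
        errs[cw] = sum(noise[bits[s*m:(s+1)*m]].any() for s in range(k_sym))
    return errs

# Symbol interleaving: whole m-bit symbols rotate across codewords.
sym_il = lambda cw: np.concatenate(
    [np.arange((s*depth + cw)*m, (s*depth + cw + 1)*m) for s in range(k_sym)])
# Bit interleaving: individual bits rotate across codewords.
bit_il = lambda cw: np.arange(cw, n_bits, depth)

print("symbol interleaving:", symbol_errors(sym_il))   # [1 0 1 1]
print("bit interleaving:   ", symbol_errors(bit_il))   # [2 2 2 2]
```

Since a bounded-distance RS decoder corrects up to a fixed number of symbol errors per codeword, fewer symbol errors per codeword translates directly into better performance, which is the direction of the analytical result.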
674.
Combined turbo coding and interference rejection for DS-CDMA. Bejide, Emmanuel Oluremi. January 2004.
This dissertation presents interference cancellation techniques for both forward error correction (FEC) coded and uncoded direct-sequence code-division multiple access (DS-CDMA) systems. Analytical models are developed for the adaptive and the non-adaptive parallel interference cancellation (PIC) receivers. Results obtained from computer simulations of both PIC receiver types confirm the accuracy of the analytical models. They show that adaptive PIC receivers based on the least mean square (LMS) algorithm achieve better bit error rate performance than their non-adaptive counterparts.
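As a rough illustration of the receiver structure, the sketch below runs a few stages of hard-decision parallel interference cancellation for a synchronous DS-CDMA model, with an LMS-style update of per-user cancellation weights. The system model, step size, and weight-update cost are illustrative assumptions, not the receiver structure or analysis derived in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 16                          # users, spreading gain (assumed)
S = rng.choice([-1, 1], size=(N, K)) / np.sqrt(N)   # signature sequences
A = np.diag([1.0, 0.8, 1.2, 0.9])     # received amplitudes (near-far spread)
b = rng.choice([-1, 1], size=K)       # transmitted bits
r = S @ A @ b + 0.3 * rng.standard_normal(N)        # received chip vector

b_hat = np.sign(S.T @ r)              # stage 0: matched-filter decisions
w = np.ones(K)                        # adaptive cancellation weights
mu = 0.05                             # LMS step size (assumed)

for stage in range(3):
    for k in range(K):
        # Subtract the weighted estimated contributions of all other users.
        interference = sum(w[j] * A[j, j] * b_hat[j] * S[:, j]
                           for j in range(K) if j != k)
        z_k = S[:, k] @ (r - interference)   # matched filter for user k
        b_hat[k] = np.sign(z_k)
        # LMS-style update: drive the residual after full cancellation
        # toward zero (a simplified surrogate cost for illustration).
        resid = r - S @ (w * np.diag(A) * b_hat)
        w[k] += mu * (S[:, k] @ resid) * A[k, k] * b_hat[k]

print(b, b_hat.astype(int))           # decisions should match the data
```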
In the second part of this dissertation, a novel iterative multiuser detector for the turbo-coded DS-CDMA system is developed, and its performance in the multirate CDMA system is investigated. The developed receiver is found to have an error-rate performance very close to the single-user limit after a few iterations, and it is resilient against the near-far effect. A methodology is also presented for using the Gaussian approximation method in the convergence analysis of iterative interference cancellation receivers for turbo-coded DS-CDMA systems. Thesis (Ph.D.), University of KwaZulu-Natal, Durban, 2004.
676.
Codec for multimedia services using wavelets and fractals. Brijmohan, Yarish. January 2004.
Advances in telecommunications, computing, and television have prompted the need to exchange video, image, and audio files. Transmission of such files finds numerous multimedia applications, such as internet multimedia, video conferencing, and videophones. However, the transmission and reception of these files are limited by the available bandwidth as well as the storage capacities of systems. There is thus a need to develop compression systems so that the required multimedia applications can operate within these limited capacities.
This dissertation presents two well-established coding approaches used in modern image and video compression systems: the wavelet and fractal methods. The wavelet-based coder, which adopts the transform coding paradigm, performs the discrete wavelet transform on an image before any compression algorithms are applied. The wavelet transform provides good energy compaction and decorrelating properties that make it well suited for compression. Fractal compression systems, on the other hand, differ from traditional transform coders: these algorithms are based on the theory of iterated function systems and take advantage of local self-similarities present in images. In this dissertation, we first review the theoretical foundations of both wavelet and fractal coders. Thereafter we evaluate different wavelet- and fractal-based compression algorithms, and assess the strengths and weaknesses of each.
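The energy compaction property mentioned above is easy to observe directly: after a wavelet transform of a locally smooth image, almost all of the energy collects in the lowpass subband. The sketch below applies one level of a hand-rolled 2-D Haar transform to a synthetic gradient image; the image and the wavelet choice are assumptions for illustration.

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar wavelet transform (separable, orthonormal)."""
    def step(a):  # transform along the last axis
        s = (a[..., 0::2] + a[..., 1::2]) / np.sqrt(2)  # lowpass (average)
        d = (a[..., 0::2] - a[..., 1::2]) / np.sqrt(2)  # highpass (detail)
        return s, d
    lo, hi = step(x)                         # horizontal pass
    ll, lh = step(lo.swapaxes(-1, -2))       # vertical pass on lowpass
    hl, hh = step(hi.swapaxes(-1, -2))       # vertical pass on highpass
    return (ll.swapaxes(-1, -2), lh.swapaxes(-1, -2),
            hl.swapaxes(-1, -2), hh.swapaxes(-1, -2))

# Smooth synthetic "image": a gradient, typical of locally smooth regions.
img = np.add.outer(np.arange(64.0), np.arange(64.0))
ll, lh, hl, hh = haar_dwt2(img)

# The transform is orthonormal, so total energy is preserved; nearly all
# of it ends up in the LL subband, which is what compressors exploit.
total = sum((s ** 2).sum() for s in (ll, lh, hl, hh))
print("fraction of energy in the LL subband:", (ll ** 2).sum() / total)
```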
Due to the shortcomings of fractal-based compression schemes, such as the tiling effect appearing in reconstructed images, a wavelet-based analysis of fractal image compression is presented. This analysis links fractal coding to the wavelet domain and yields a hybrid scheme known as the fractal-wavelet coder. We show that by using smooth wavelet bases in computing the wavelet transform, the tiling effect of fractal systems can be removed. The few wavelet-fractal coders that have been proposed in the literature are discussed, showing advantages over traditional fractal coders.
This dissertation then presents a new low-bit-rate video compression system based on fractal coding in the wavelet domain. This coder combines the advantages of the wavelet and fractal coders identified in the review. The self-similarity property of fractal coders exploits the high spatial and temporal correlation between video frames: the fractal coding step gives an approximate representation of the coded frame, while the wavelet technique adds detail to the frame. In the proposed scheme, each frame is decomposed using the pyramidal multi-resolution wavelet transform. A motion detection operation then partitions the subtrees into motion and non-motion subtrees. The non-motion subtrees are coded with a single binary decision, whereas the moving ones are coded using a combination of the wavelet SPIHT algorithm and a fractal variable-subtree-size coding scheme. All intra-frame compression is performed with the SPIHT algorithm and inter-frame compression with the fractal-wavelet method described above.
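As a rough sketch of the classification step just described, the fragment below labels wavelet "subtrees" (approximated here as coefficient blocks) as motion or non-motion by thresholding the energy of their temporal difference. The block granularity and threshold are illustrative assumptions, and the SPIHT and fractal coding stages that would follow are not shown.

```python
import numpy as np

def classify_subtrees(coeffs_cur, coeffs_prev, block=8, thresh=5.0):
    """Label each wavelet subtree (here: a coefficient block) as 'motion'
    or 'non-motion' by the energy of its temporal difference. Non-motion
    subtrees cost a single binary decision; motion subtrees would go on
    to the SPIHT + fractal variable-subtree-size coder."""
    h, w = coeffs_cur.shape
    labels = {}
    for i in range(0, h, block):
        for j in range(0, w, block):
            d = coeffs_cur[i:i+block, j:j+block] - \
                coeffs_prev[i:i+block, j:j+block]
            labels[(i, j)] = "motion" if (d ** 2).sum() > thresh else "non-motion"
    return labels

rng = np.random.default_rng(0)
prev = rng.standard_normal((32, 32))   # previous frame's coefficients
cur = prev.copy()
cur[8:16, 8:16] += 2.0                 # simulate motion in one region
labels = classify_subtrees(cur, prev)
print(sum(v == "motion" for v in labels.values()), "motion subtrees")
```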
The proposed coder is then compared to current low-bit-rate video coding standards, such as H.263+ and MPEG-4, through analysis and simulations. Results show that the proposed coder is competitive with the current standards, with performance improvements shown on video sequences that do not possess large global motion. Finally, a real-time implementation of the proposed algorithm is performed on a digital signal processor, illustrating the suitability of the proposed coder for numerous multimedia applications. Thesis (M.Sc.Eng.), University of KwaZulu-Natal, Durban, 2004.
677.
Interleaved concatenated coding for input-constrained channels. Anim-Appiah, Kofi Dankwa. 12 1900.
No description available.
678.
Scalable video coding using spatio-temporal interpolation. Bayrakeri, Sadik. 05 1900.
No description available.
679.
Very low bit rate video coding using adaptive nonuniform sampling and matching pursuit. Indra, Isara. 12 1900.
No description available.
680.
Design of filter banks for subband coding systems. Alexandrou, Alexandros. January 1985.
No description available.