On a posteriori probability decoding of linear block codes over discrete channels
Griffiths, Wayne Bradley, January 2008
One of the facets of the mobile or wireless environment is that errors quite often occur in bursts. Strong codes are therefore required to provide protection against such errors, which in turn motivates decoding algorithms that are simple to implement yet still take the dependence, or memory, of the channel model into account in order to give optimal decoding estimates. Furthermore, such algorithms should be applicable to a variety of channel models and signalling alphabets.

The research presented within this thesis describes a number of algorithms which can be used with linear block codes. Given the received word, these algorithms determine, on a symbol-by-symbol basis, the symbol which was most likely transmitted. Owing to their relative simplicity, a collection of algorithms for memoryless channels is reported first; this establishes the general style and principles of the overall collection. The concept of matrix diagonalisation may or may not be applied, resulting in two different types of procedure, and it is shown that the choice between them should be motivated by whether storage space or computational complexity has the higher priority. As with all other procedures explained herein, the derivation is first performed for a binary signalling alphabet and then extended to fields of prime order.

These procedures form the paradigm for algorithms used in conjunction with finite-state channel models, where errors generally occur in bursts. In such cases, the necessary information is stored in matrices rather than as scalars. Finally, by analogy with the weight polynomials of a code and its dual as characterised by the MacWilliams identities, new procedures are developed for particular types of Gilbert-Elliott channel models. Here, the calculations are derived from three parameters which profile the occurrence of errors in those models, and the decoding is carried out using polynomial evaluation rather than matrix multiplication. Complementing this theory are several examples detailing the steps required to perform the decoding, as well as a collection of simulation results demonstrating the practical value of these algorithms.
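As a minimal sketch of the symbol-by-symbol a posteriori probability idea described above (not the thesis's own procedures), the following Python fragment performs brute-force symbol-wise APP decoding of a small binary linear block code over a memoryless binary symmetric channel. The (7,4) Hamming generator matrix, the crossover probability p, and the helper names are illustrative assumptions.

```python
import itertools
import numpy as np

# Generator matrix of the (7,4) Hamming code in systematic form (illustrative choice).
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=int)

def codewords(G):
    """Enumerate all codewords generated by G over GF(2)."""
    k, _ = G.shape
    return [np.mod(np.array(m) @ G, 2) for m in itertools.product([0, 1], repeat=k)]

def app_decode(received, G, p):
    """Symbol-wise APP decoding over a memoryless binary symmetric channel.

    For each position i, P(c_i = b | r) is proportional to the sum of
    P(r | c) over all codewords c with c_i = b, where
    P(r | c) = p**d * (1 - p)**(n - d) and d is the Hamming distance
    between r and c.  Each symbol is estimated as the bit with the larger
    a posteriori probability.
    """
    r = np.asarray(received)
    n = r.size
    post = np.zeros((n, 2))            # post[i, b]: unnormalised P(c_i = b | r)
    for c in codewords(G):
        d = int(np.sum(c != r))        # Hamming distance between r and this codeword
        likelihood = (p ** d) * ((1 - p) ** (n - d))
        for i in range(n):
            post[i, c[i]] += likelihood
    return post.argmax(axis=1), post / post.sum(axis=1, keepdims=True)

# Example: the all-zero codeword is sent, two bits are flipped by the channel.
received = np.array([1, 0, 0, 0, 0, 1, 0])
estimate, probabilities = app_decode(received, G, p=0.1)
print("APP estimate:", estimate)
```

The enumeration over all 2^k codewords is only practical for short codes; it is used here purely to make the symbol-by-symbol posterior computation explicit, not as a substitute for the structured procedures summarised in the abstract.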