In a wireless communication system the transmitted information is subjected to a number of impairments,
among which inter-symbol interference (ISI), thermal noise and fading are the most prevalent.
Owing to the dispersive nature of the communication channel, ISI results from the arrival of multiple
delayed copies of the transmitted signal at the receiver. Thermal noise is caused by the random
motion of electrons in the receiver hardware, while fading is the result of constructive and destructive
interference, as well as absorption during transmission. To protect the source information,
error-correction coding (ECC) is performed in the transmitter, after which the coded information is
interleaved in order to temporally separate the coded information before transmission.
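As a minimal illustration of this transmit chain and of how a dispersive channel produces ISI, the following sketch uses assumed parameters that are not taken from the thesis: a simple repetition code stands in for the ECC, together with a random interleaver, BPSK mapping, an arbitrary three-tap channel impulse response and an assumed noise level.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Transmitter (assumed, illustrative parameters) ---
bits = rng.integers(0, 2, 512)              # source bits
coded = np.repeat(bits, 2)                  # rate-1/2 repetition code as an ECC stand-in
pi = rng.permutation(coded.size)            # random interleaver
interleaved = coded[pi]                     # temporally separates the coded bits
symbols = 1.0 - 2.0 * interleaved           # BPSK mapping: 0 -> +1, 1 -> -1

# --- Dispersive channel: delayed, scaled copies of the signal cause ISI ---
h = np.array([0.8, 0.5, 0.3])               # assumed 3-tap channel impulse response
received = np.convolve(symbols, h)[:symbols.size]

# --- Thermal noise at the receiver ---
sigma = 0.3                                 # assumed noise standard deviation
received = received + rng.normal(0.0, sigma, received.size)
```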
Turbo equalization (TE) is a technique whereby equalization (to correct ISI) and decoding (to correct
errors) are performed iteratively, with the equalizer and the decoder exchanging extrinsic information
derived from the optimal posterior probabilistic information each produces. The extrinsic information
determined from the decoder output is used as prior information by the equalizer, and vice versa, allowing
the bit-error rate (BER) performance to improve with each iteration. Turbo equalization achieves
excellent BER performance, but its computational complexity grows exponentially with both the channel
memory and the encoder memory, so it cannot be used in dispersive channels where the channel memory
is large. A number of low complexity equalizers have consequently been developed to replace the
maximum a posteriori probability (MAP) equalizer in order to reduce the
complexity. Some of the resulting low complexity turbo equalizers achieve performance comparable
to that of a conventional turbo equalizer that uses a MAP equalizer. In other cases the low complexity
turbo equalizers perform much worse than the corresponding conventional turbo equalizer (CTE)
because of suboptimal equalization and the inability of the low complexity equalizers to utilize the
extrinsic information effectively as prior information.
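To make this exchange concrete, the following sketch shows only the generic structure of a turbo equalization iteration in log-likelihood ratio (LLR) form; the soft-in soft-out equalizer and decoder, the interleaver functions and the iteration count are placeholder assumptions, not a particular equalizer or decoder from the thesis.

```python
import numpy as np

def turbo_equalize(received, La_init, siso_equalizer, siso_decoder,
                   interleave, deinterleave, iterations=5):
    """Generic extrinsic-information exchange of turbo equalization.
    siso_equalizer and siso_decoder are placeholder soft-in soft-out
    components returning posterior LLRs."""
    La_eq = La_init                       # prior LLRs for the equalizer
    for _ in range(iterations):
        # Equalizer: posterior LLRs minus the prior gives extrinsic information
        L_eq = siso_equalizer(received, La_eq)
        Le_eq = L_eq - La_eq
        # Deinterleave: equalizer extrinsic becomes the decoder's prior
        La_dec = deinterleave(Le_eq)
        L_dec = siso_decoder(La_dec)
        Le_dec = L_dec - La_dec
        # Interleave: decoder extrinsic becomes the equalizer's prior
        La_eq = interleave(Le_dec)
    return L_dec                          # final decoder output LLRs
```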
In this thesis the author develops two novel iterative low complexity turbo equalizers. The turbo equalization
problem is modeled on superstructures, where, in the context of this thesis, a superstructure
performs the tasks of both the equalizer and the decoder. The resulting low complexity turbo equalizers
process all the available information as a whole, so there is no exchange of extrinsic information
between different subunits. The first is modeled on a dynamic Bayesian network (DBN) that represents
the turbo equalization problem as a quasi-directed acyclic graph, allowing a dominant connection
between the observed variables and their corresponding hidden variables, as well as weak connections
between the observed variables and past and future hidden variables. The resulting turbo equalizer is
named the dynamic Bayesian network turbo equalizer (DBN-TE). The second low complexity turbo
equalizer developed in this thesis is modeled on a Hopfield neural network, and is named the Hopfield
neural network turbo equalizer (HNN-TE). The HNN-TE is an amalgamation of the HNN maximum
likelihood sequence estimation (MLSE) equalizer, developed previously by this author, and an HNN
MLSE decoder derived from a single codeword HNN decoder. Both the low complexity turbo equalizers
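As background to the HNN-TE, the sketch below shows only the generic Hopfield-style iteration that minimizes an energy function over bipolar states; how the connection matrix and bias vector are constructed from the channel, the code and the received signal is the contribution of the thesis and is not reproduced here, and all parameters shown are assumptions.

```python
import numpy as np

def hopfield_iterate(W, b, s0, iterations=50, beta=1.0):
    """Generic Hopfield-style iteration descending the energy
    E(s) = -0.5 * s^T W s - b^T s over bipolar states s in {-1, +1}^N.
    W and b are assumed inputs; their construction is not shown."""
    s = s0.astype(float)
    for _ in range(iterations):
        u = W @ s + b                 # local field seen by each neuron
        s = np.tanh(beta * u)         # smooth, synchronous update (one common choice)
    return np.sign(s)                 # hard decisions on the bipolar states
```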
developed in this thesis are able to jointly and iteratively equalize and decode coded, randomly interleaved information transmitted through highly dispersive multipath channels. The performance of both these low complexity turbo equalizers is comparable to that of the conventional
turbo equalizer, while their computational complexity is lower for channels with long
memory. Their performance is also comparable to that of other low complexity turbo equalizers, although
their computational complexity is higher. The computational complexity of both the DBN-TE and
the HNN-TE is approximately quadratic at best (and cubic at worst) in the transmitted data block
length, exponential in the encoder constraint length and approximately independent of the channel
memory length. The approximate quadratic complexity of both the DBN-TE and the HNN-TE is
mostly due to interleaver mitigation, which requires matrix multiplication with matrices whose dimensions
equal the data block length; without this mitigation, turbo equalization using superstructures is
impossible for systems employing random interleavers.
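The role of the interleaver in this complexity figure can be illustrated as follows: if a random interleaver over a block of length N is written as an N x N permutation matrix, then multiplying it with another N x N matrix costs O(N^3) as a dense product but only O(N^2) when implemented as row indexing, consistent with the quadratic-to-cubic range quoted above. The sketch below is only an illustration of this scaling under assumed dimensions and does not reproduce the superstructure construction of the thesis.

```python
import numpy as np

N = 256                                  # assumed data block length
rng = np.random.default_rng(1)

pi = rng.permutation(N)                  # random interleaver
P = np.eye(N)[pi]                        # N x N permutation matrix: P @ x == x[pi]

A = rng.standard_normal((N, N))          # stand-in N x N superstructure matrix

B_dense = P @ A                          # dense matrix product: O(N^3) in general
B_fast = A[pi]                           # same result via row indexing: O(N^2)

assert np.allclose(B_dense, B_fast)
```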
Degree | Thesis (PhD)--University of Pretoria, 2013
Department | Electrical, Electronic and Computer Engineering
Identifier | oai:union.ndltd.org:netd.ac.za/oai:union.ndltd.org:up/oai:repository.up.ac.za:2263/32814
Date | January 2013 |
Creators | Myburgh, Hermanus Carel |
Contributors | Olivier, Jan Corne |
Source Sets | South African National ETD Portal |
Language | English |
Detected Language | English |
Type | Thesis |
Rights | © 2013 University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria. |