1

BCJR detection for GMSK modulation

Wu, Ching-Tang 02 September 2003 (has links)
CPM is advantageous in spectral efficiency because of its phase continuity in modulation. One example of CPM is GMSK, which has been applied in the wireless GSM system. The conventional demodulation of CPM is achieved by the Viterbi algorithm, because the dynamics of the CPM signal phase can be described by a state transition structure. This state transition structure can be represented by a trellis diagram, which the Viterbi algorithm solves efficiently by selecting the best survivor path under a maximum-likelihood criterion. In this thesis, the best survivor path is measured by the Euclidean distance in modulation. The alternative demodulation method we propose is the well-known BCJR algorithm, which is based on a-posteriori probabilities and is an alternative method for decoding convolutional codes. We compare the BCJR and Viterbi algorithms for demodulation of the GMSK system. Experimental results demonstrate that BCJR achieves a better error probability than the Viterbi algorithm. We also compare GMSK systems with different overlapping lengths and modulation indices. The best combination of L and h suggested by our experiments is L=3 and h=3/4.
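The two demodulators compared in this abstract can be sketched on a toy trellis. The following is a minimal illustrative sketch, not the actual GMSK phase trellis: the two states, the emitted symbol values, and the noise variance are invented for illustration. Viterbi keeps the survivor path with the smallest accumulated squared Euclidean distance; BCJR computes per-bit posteriors with the forward-backward recursion.

```python
import math

# Hypothetical 2-state trellis: state = previous bit; each (state, bit)
# transition emits a made-up real-valued symbol. The real GMSK trellis
# has more states for overlapping length L > 1.
EMIT = {(0, 0): -1.0, (0, 1): 1.0, (1, 0): -0.5, (1, 1): 0.5}

def viterbi(rx):
    """ML sequence detection: keep the survivor with the smallest
    accumulated squared Euclidean distance."""
    metric = {0: 0.0, 1: float("inf")}      # assume start in state 0
    paths = {0: [], 1: []}
    for r in rx:
        new_metric, new_paths = {}, {}
        for ns in (0, 1):                   # next state equals the new bit
            best = None
            for s in (0, 1):
                m = metric[s] + (r - EMIT[(s, ns)]) ** 2
                if best is None or m < best[0]:
                    best = (m, paths[s] + [ns])
            new_metric[ns], new_paths[ns] = best
        metric, paths = new_metric, new_paths
    return paths[min((0, 1), key=lambda s: metric[s])]

def bcjr(rx, noise_var=0.5):
    """Per-bit a-posteriori probabilities P(bit=1 | rx) via the
    forward-backward recursion (uniform priors assumed)."""
    n = len(rx)
    def gamma(t, s, b):                     # branch metric
        d = rx[t] - EMIT[(s, b)]
        return math.exp(-d * d / (2 * noise_var))
    alpha = [{0: 1.0, 1: 0.0}]              # forward pass
    for t in range(n):
        a = {ns: sum(alpha[t][s] * gamma(t, s, ns) for s in (0, 1))
             for ns in (0, 1)}
        z = sum(a.values()) or 1.0
        alpha.append({s: v / z for s, v in a.items()})
    beta = [None] * (n + 1)
    beta[n] = {0: 1.0, 1: 1.0}              # backward pass
    for t in range(n - 1, -1, -1):
        b_ = {s: sum(gamma(t, s, b) * beta[t + 1][b] for b in (0, 1))
              for s in (0, 1)}
        z = sum(b_.values()) or 1.0
        beta[t] = {s: v / z for s, v in b_.items()}
    post = []
    for t in range(n):
        p = {b: sum(alpha[t][s] * gamma(t, s, b) * beta[t + 1][b]
                    for s in (0, 1)) for b in (0, 1)}
        post.append(p[1] / (p[0] + p[1]))
    return post
```

On clean inputs both detectors agree; BCJR additionally reports how confident each bit decision is, which is what makes it useful as a soft-output stage.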
2

Non-iterative joint decoding and signal processing: universal coding approach for channels with memory

Nangare, Nitin Ashok 16 August 2006 (has links)
A non-iterative receiver is proposed to achieve near-capacity performance on intersymbol interference (ISI) channels. There are two main ingredients in the proposed design: i) a novel BCJR-DFE equalizer, which produces optimal soft estimates of the inputs to the ISI channel given all the observations from the channel and exactly L past symbols, where L is the memory of the ISI channel; ii) an encoder structure that ensures the L past symbols can be used in the DFE in an error-free manner, through the use of a capacity-achieving code for a memoryless channel. The computational complexity of the proposed receiver structure is less than that of one iteration of the turbo receiver. We also provide a proof showing that the proposed receiver achieves the i.i.d. capacity of any constrained-input ISI channel. This DFE-based receiver has several advantages over an iterative (turbo) receiver: low complexity, the fact that codes optimized for memoryless channels can be used with channels with memory, and the fact that the channel does not need to be known at the transmitter. The proposed coding scheme is universal in the sense that a single code of rate r, optimized for a memoryless channel, provides small error probability uniformly across all AWGN-ISI channels of i.i.d. capacity less than r. This general principle of a non-iterative receiver also applies to other signal processing functions, such as timing recovery, pattern-dependent noise whitening, and joint demodulation and decoding, making the proposed encoder and receiver structure a viable alternative to iterative signal processing. The results show significant complexity reduction and performance gain for timing recovery and pattern-dependent noise whitening on magnetic recording channels.
3

MINIMALITY AND DUALITY OF TAIL-BITING TRELLISES FOR LINEAR CODES

Weaver, Elizabeth A. 01 January 2012 (has links)
Codes can be represented by edge-labeled directed graphs called trellises, which are used in decoding with the Viterbi algorithm. We will first examine the well-known product construction for trellises and present an algorithm for recovering the factors of a given trellis. To maximize efficiency, trellises that are minimal in a certain sense are desired. It was shown by Koetter and Vardy that one can produce all minimal tail-biting trellises for a code by looking at a special set of generators for it. These generators, along with a set of spans, comprise what is called a characteristic pair, and we will discuss how to determine the number of these pairs for a given code. Finally, we will look at trellis dualization, in which a trellis for a code is used to produce a trellis representing the dual code. The first method we discuss comes naturally with the known BCJR construction. The second, introduced by Forney, is a very general procedure that works for many different types of graphs and is based on dualizing the edge set in a natural way. We call this construction the local dual, and we show the conditions necessary for these two procedures to result in the same dual trellis.
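The code/dual-code relationship underlying both dualization procedures can be illustrated numerically. The following is a small GF(2) sketch (the row-reduction routine and the example matrix are ours, not from the thesis): the dual code C⊥ consists of all words orthogonal to every codeword, so a generator matrix for it is a basis of the null space of the original generator matrix G over GF(2).

```python
def gf2_nullspace(G):
    """Return a basis of the GF(2) null space of G (rows = generators),
    i.e. a generator matrix for the dual code."""
    rows, n = len(G), len(G[0])
    A = [list(r) for r in G]
    pivots, r = [], 0
    for c in range(n):                       # Gaussian elimination mod 2
        piv = next((i for i in range(r, rows) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(rows):
            if i != r and A[i][c]:
                A[i] = [a ^ b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:                           # one basis vector per free column
        v = [0] * n
        v[f] = 1
        for row, p in zip(A[:r], pivots):
            v[p] = row[f]
        basis.append(v)
    return basis
```

For example, for the [3, 2] even-weight-style code generated by (1,0,1) and (0,1,1), the dual is the [3, 1] repetition code generated by (1,1,1); every dual basis vector has even overlap with every row of G.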
4

Iterative equalization and decoding using reduced-state sequence estimation based soft-output algorithms

Tamma, Raja Venkatesh 30 September 2004 (has links)
We study and analyze the performance of iterative equalization and decoding (IED) using an M-BCJR equalizer. We use bit error rate (BER) and frame error rate simulations together with extrinsic information transfer (EXIT) charts to study and compare the performance of M-BCJR and BCJR equalizers on precoded and non-precoded channels. Using EXIT charts, the achievable channel capacities with IED using the BCJR, M-BCJR and MMSE LE equalizers are also compared. We predict the BER performance of IED using the M-BCJR equalizer from EXIT charts and explain the discrepancy between the observed and predicted performances by showing that the extrinsic outputs of the M-BCJR algorithm are not true logarithmic-likelihood ratios (LLRs). We show that the true LLRs can be estimated if the conditional distributions of the extrinsic outputs are known, and finally we design a practical estimator for computing the true LLRs from the extrinsic outputs of the M-BCJR equalizer.
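The final estimation step described above can be sketched as follows. This is an illustrative sketch, not the thesis's estimator: assuming (hypothetically) that the conditional densities of the extrinsic output e given the transmitted bit are Gaussian with estimated means and variances, the true LLR is the log-ratio of the two densities.

```python
import math

def true_llr(e, mu1, var1, mu0, var0):
    """Recompute a true LLR from a raw extrinsic output e, given
    estimated Gaussian conditional densities f(e | b=1) and f(e | b=0):
    LLR = log f(e | b=1) - log f(e | b=0)."""
    log_f1 = -((e - mu1) ** 2) / (2 * var1) - 0.5 * math.log(2 * math.pi * var1)
    log_f0 = -((e - mu0) ** 2) / (2 * var0) - 0.5 * math.log(2 * math.pi * var0)
    return log_f1 - log_f0
```

In the symmetric equal-variance case (mu1 = -mu0 = m, var1 = var0 = v) this reduces to the familiar scaling 2·m·e/v, so a mismatched scale in the raw extrinsic output shows up directly as a wrong LLR magnitude.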
5

Reducing the complexity of equalisation and decoding of shingled writing

Abdulrazaq, Muhammad Bashir January 2017 (has links)
Shingled Magnetic Recording (SMR) technology is important for expanding magnetic hard-disk capacity beyond the limit of current disk technology, and among contending technologies it offers the solution requiring the least change from current technology. Robust, easy-to-implement Digital Signal Processing (DSP) techniques are needed to realise the potential of SMR. Currently proposed DSP techniques centre on the use of Two-Dimensional Magnetic Recording (TDMR) techniques in equalisation and detection, coupled with iterative error correction codes such as Low-Density Parity-Check (LDPC) codes. Maximum Likelihood (ML) algorithms are normally used in TDMR detection, but their shortcoming is complexity that grows exponentially with the number of bits. Reducing the complexity of these processes is therefore essential if SMR is to be deployed in personal computers in the near future. This research investigated means of reducing the complexity of equalisation and detection techniques. Linear equalisers were found to be adequate for low-density situations. Combining an ML detector across-track with a linear equaliser along-track was found to provide a lower-complexity, better-performing alternative to a linear equaliser across-track with ML along-track, provided density is relaxed along-track and increased across-track; a gain of up to 10 dB was achieved. In situations with high density in both dimensions, full two-dimensional (2D) detectors provide better performance. A low-complexity full 2D detector was formed by serially concatenating two ML detectors, one for each direction, instead of the single 2D ML detector used in other literature. This reduces the complexity with respect to side interference from exponential to linear.
The use of a single parity bit serving simultaneously as a run-length-limited code and an error correction code is also presented, with a small gain of about 1 dB at a BER of 10^-5 recorded for the high-density situation.
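The single-bit parity idea can be sketched in a few lines. This is an illustrative even-parity encoder/checker, not the thesis's actual code design: one appended bit per block gives a parity constraint the detector can exploit, and the inserted bit also breaks up long identical runs.

```python
def encode(bits):
    """Append one even-parity bit to a block of data bits."""
    return bits + [sum(bits) % 2]

def check(word):
    """A received block is consistent iff its overall parity is even."""
    return sum(word) % 2 == 0
```

A single parity bit detects any odd number of bit errors per block; combined with soft channel information in the detector, this weak constraint is what yields the small coding gain reported above.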
6

Joint Equalization and Decoding via Convex Optimization

Kim, Byung Hak 2012 May 1900 (has links)
The unifying theme of this dissertation is the development of new solutions for decoding and inference problems based on convex optimization methods. The first part considers the joint detection and decoding problem for low-density parity-check (LDPC) codes on finite-state channels (FSCs). Hard-disk drives (or magnetic recording systems), where the required error rate (after decoding) is too low to be verifiable by simulation, are among the most important applications of this research. Recently, LDPC codes have attracted a lot of attention in the magnetic storage industry, and some hard-disk drives have started using iterative decoding. Despite progress in the area of reduced-complexity detection and decoding algorithms, there has been some resistance to the deployment of turbo-equalization (TE) structures (with iterative detectors/decoders) in magnetic-recording systems because of error floors and the difficulty of accurately predicting performance at very low error rates. To address this problem for channels with memory, such as FSCs, we propose a new decoding algorithm based on a well-defined convex optimization problem; in particular, it is based on the linear-programming (LP) formulation of the joint decoding problem for LDPC codes over FSCs. It exhibits two favorable properties: provable convergence and predictable error floors (via pseudo-codeword analysis). Since general-purpose LP solvers are too complex to make the joint LP decoder feasible for practical purposes, we develop an efficient iterative solver for the joint LP decoder by taking advantage of its dual-domain structure. The main advantage of this approach is that it combines the predictability and superior performance of joint LP decoding with the computational complexity of TE. The second part of this dissertation considers the matrix completion problem: the recovery of a data matrix from incomplete, or even corrupted, entries of an unknown matrix.
Recommender systems are good representatives of this problem, and this research is important for the design of information retrieval systems which require very high scalability. We show that our IMP algorithm reduces the well-known cold-start problem associated with collaborative filtering systems in practice.
7

Fpga Implementation Of Jointly Operating Channel Estimator And Parallelized Decoder

Kilcioglu, Caglar 01 September 2009 (has links) (PDF)
In this thesis, the implementation details of a joint channel estimator and parallelized decoder structure on an FPGA-based platform are considered. Turbo decoders are used for the decoding process in this structure; however, turbo decoders introduce large decoding latencies since they operate in an iterative manner. To overcome that problem, parallelization is applied to the turbo codes, and the resulting parallel decodable turbo code (PDTC) structure is employed for coding. The performance of a PDTC decoder and the parameters affecting its performance are given for an additive white Gaussian noise (AWGN) channel. These results are compared with the results of a parallel study which employs a different architecture in implementing the PDTC decoder. In the fading-channel case, a pilot-symbol-assisted estimation method is employed for channel estimation, in which the channel coefficients are estimated by a 2-way LMS (least mean-squares) algorithm. The difficulties in implementing this joint structure in fixed-point arithmetic, and the solutions to overcome them, are described in detail. The proposed joint structure is tested with varying design parameters over a Rayleigh fading channel. The overall decoding latencies and allowed data rates are calculated after obtaining a reasonable performance from the design.
8

Viterbi Decoded Linear Block Codes for Narrowband and Wideband Wireless Communication Over Mobile Fading Channels

Staphorst, Leonard 08 August 2005 (has links)
Since the frantic race towards the Shannon bound [1] commenced in the early 1950’s, linear block codes have become integral components of most digital communication systems. Both binary and non-binary linear block codes have proven themselves as formidable adversaries against the impediments presented by wireless communication channels. However, prior to the landmark 1974 paper [2] by Bahl et al. on the optimal Maximum a-Posteriori Probability (MAP) trellis decoding of linear block codes, practical linear block code decoding schemes were not only based on suboptimal hard decision algorithms, but also code-specific in most instances. In 1978 Wolf expedited the work of Bahl et al. by demonstrating the applicability of a block-wise Viterbi Algorithm (VA) to Bahl-Cocke-Jelinek-Raviv (BCJR) trellis structures as a generic optimal soft decision Maximum-Likelihood (ML) trellis decoding solution for linear block codes [3]. This study, largely motivated by code implementers’ ongoing search for generic linear block code decoding algorithms, builds on the foundations established by Bahl, Wolf and other contributing researchers by thoroughly evaluating the VA decoding of popular binary and non-binary linear block codes on realistic narrowband and wideband digital communication platforms in lifelike mobile environments. Ideally, generic linear block code decoding algorithms must not only be modest in terms of computational complexity, but they must also be channel aware. Such universal algorithms will undoubtedly be integrated into most channel coding subsystems that adapt to changing mobile channel conditions, such as the adaptive channel coding schemes of current Enhanced Data Rates for GSM Evolution (EDGE), 3rd Generation (3G) and Beyond 3G (B3G) systems, as well as future 4th Generation (4G) systems. In this study classic BCJR linear block code trellis construction is annotated and applied to contemporary binary and non-binary linear block codes. 
Since BCJR trellis structures are inherently sizable and intricate, rudimentary trellis complexity calculation and reduction algorithms are also presented and demonstrated. The block-wise VA for BCJR trellis structures, initially introduced by Wolf in [3], is revisited and improved to incorporate Channel State Information (CSI) during its ML decoding efforts. In order to accurately appraise the Bit-Error-Rate (BER) performances of VA decoded linear block codes in authentic wireless communication environments, Additive White Gaussian Noise (AWGN), flat fading and multi-user multipath fading simulation platforms were constructed. Included in this task was the development of baseband complex flat and multipath fading channel simulator models, capable of reproducing the physical attributes of realistic mobile fading channels. Furthermore, a complex Quadrature Phase Shift Keying (QPSK) system was employed as the narrowband communication link of choice for the AWGN and flat fading channel performance evaluation platforms. The versatile B3G multi-user multipath fading simulation platform, however, was constructed using a wideband RAKE receiver-based complex Direct Sequence Spread Spectrum Multiple Access (DS/SSMA) communication system that supports unfiltered and filtered Complex Spreading Sequences (CSS). This wideband platform is not only capable of analysing the influence of frequency selective fading on the BER performances of VA decoded linear block codes, but also the influence of the Multi-User Interference (MUI) created by other users active in the Code Division Multiple Access (CDMA) system. CSS families considered during this study include Zadoff-Chu (ZC) [4, 5], Quadriphase (QPH) [6], Double Sideband (DSB) Constant Envelope Linearly Interpolated Root-of-Unity (CE-LI-RU) filtered Generalised Chirp-like (GCL) [4, 7-9] and Analytical Bandlimited Complex (ABC) [7, 10] sequences. 
Numerous simulated BER performance curves, obtained using the AWGN, flat fading and multi-user multipath fading channel performance evaluation platforms, are presented in this study for various important binary and non-binary linear block code classes, all decoded using the VA. Binary linear block codes examined include Hamming and Bose-Chaudhuri-Hocquenghem (BCH) codes, whereas popular burst error correcting non-binary Reed-Solomon (RS) codes receive special attention. Furthermore, a simple cyclic binary linear block code is used to validate the viability of employing the reduced trellis structures produced by the proposed trellis complexity reduction algorithm. The simulated BER performance results shed light on the error correction capabilities of these VA decoded linear block codes when influenced by detrimental channel effects, including AWGN, Doppler spreading, diminished Line-of-Sight (LOS) signal strength, multipath propagation and MUI. It also investigates the impact of other pertinent communication system configuration alternatives, including channel interleaving, code puncturing, the quality of the CSI available during VA decoding, RAKE diversity combining approaches and CSS correlation characteristics. From these simulated results it can not only be gathered that the VA is an effective generic optimal soft input ML decoder for both binary and non-binary linear block codes, but also that the inclusion of CSI during VA metric calculations can fortify the BER performances of such codes beyond that attainable by classic ML decoding algorithms. / Dissertation (MEng(Electronic))--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / unrestricted
