1

Complete-MDP convolutional codes over the erasure channel

Tomás Estevan, Virtudes 20 July 2010 (has links)
No description available.
2

Quantum convolutional stabilizer codes

Chinthamani, Neelima 30 September 2004 (has links)
Quantum error correction codes were introduced as a means to protect quantum information from decoherence and operational errors. Based on their approach to error control, error correcting codes can be divided into two different classes: block codes and convolutional codes. There has been significant development towards finding quantum block codes since they were first discovered in 1995. In contrast, quantum convolutional codes have remained largely uninvestigated. In this thesis, we develop the stabilizer formalism for quantum convolutional codes. We define distance properties of these codes and give a general method for constructing an encoding circuit from a set of generators of the stabilizer of a quantum convolutional stabilizer code. The resulting encoding circuit enables online encoding of the qubits, i.e., the encoder does not have to wait for the input transmission to end before starting the encoding process. We develop the quantum analogue of the Viterbi algorithm. The quantum Viterbi algorithm (QVA) is a maximum likelihood error estimation algorithm whose complexity grows linearly with the number of encoded qubits. A variation of the quantum Viterbi algorithm, the Windowed QVA, is also discussed. Using the Windowed QVA, we can estimate the most likely error without waiting for the entire received sequence.
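The quantum Viterbi algorithm described above generalizes the classical Viterbi algorithm. As a point of reference, here is a minimal hard-decision Viterbi decoder for the standard rate-1/2 (7,5) binary convolutional code; this is an illustrative classical sketch, not code from the thesis:

```python
# Hard-decision Viterbi decoding for the rate-1/2 (7,5) binary
# convolutional code (memory 2). Classical analogue only; the thesis
# develops the quantum version.

G = (0b111, 0b101)  # generator polynomials, octal 7 and 5

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state  # current input in the high bit
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    # survivor path metrics per trellis state; encoder starts in state 0
    metrics, paths = {0: 0}, {0: []}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_m, new_p = {}, {}
        for s, m in metrics.items():
            for b in (0, 1):
                reg = (b << 2) | s
                out = [bin(reg & g).count("1") % 2 for g in G]
                ns = reg >> 1
                cost = m + sum(x != y for x, y in zip(out, r))
                if ns not in new_m or cost < new_m[ns]:
                    new_m[ns], new_p[ns] = cost, paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[min(metrics, key=metrics.get)]

msg = [1, 0, 1, 1, 0, 0]     # two tail zeros flush the encoder
code = encode(msg)
code[3] ^= 1                  # inject one channel bit error
assert viterbi(code) == msg   # the single error is corrected
```

The path-metric table grows only with the (constant) number of states, so the work is linear in the number of encoded bits, mirroring the linear-complexity claim for the QVA above.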
4

Dual domain decoding of high rate convolutional codes for iterative decoders

Srinivasan, Sudharshan January 2008 (has links)
This thesis addresses the problem of decoding high rate convolutional codes directly, without resorting to puncturing. High rate codes are necessary for applications which require high bandwidth efficiency, like high data rate communication systems and magnetic recording systems. Convolutional (rate k/n) codes, used as component codes for turbo codes, are preferred for their regular trellis structure and the resulting ease in decoding. However, the branch complexity of the (primal) code trellis increases exponentially with k for k/(k+1) codes, quickly making decoding on the code trellis impractical as the code rate increases. 'Puncturing' is the method traditionally used for generating high rate codes; it keeps the decoding complexity nearly the same for a wide range of code rates, since the same 'mother' code decoder is used at the receiver, while only the puncturing and depuncturing pattern is altered for changes in code rate. However, puncturing constrains the search for the best possible high rate code, thereby resulting in a performance penalty, particularly at high SNRs.
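Puncturing, as described above, deletes selected mother-code bits according to a fixed pattern, and the receiver reinserts erasures before running the mother-code decoder. A minimal sketch (the pattern and function names are illustrative, not from the thesis) turning a rate-1/2 mother code into a rate-2/3 code:

```python
# Puncture a rate-1/2 mother code to rate 2/3: per 4 coded bits
# (2 info bits), transmit 3. 1 = transmit, 0 = delete.
PATTERN = [1, 1, 1, 0]

def puncture(coded):
    pat = PATTERN * (len(coded) // 4 + 1)
    return [b for b, keep in zip(coded, pat) if keep]

def depuncture(received, n):
    # rebuild an n-bit mother-code sequence, marking deleted
    # positions as erasures (None) for the mother-code decoder
    it = iter(received)
    return [next(it) if PATTERN[i % 4] else None for i in range(n)]

coded = [1, 0, 1, 1, 0, 1, 0, 0]      # 8 mother bits from 4 info bits
sent = puncture(coded)                 # 6 bits on the channel: rate 2/3
print(sent)                            # [1, 0, 1, 0, 1, 0]
print(depuncture(sent, 8))             # erasures where bits were deleted
```

Changing only `PATTERN` changes the code rate while the mother-code decoder stays fixed, which is exactly why puncturing keeps decoder complexity nearly constant across rates.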
5

Construction of ternary convolutional codes

Ansari, Muhammad Khizar 14 August 2019 (has links)
Error control coding is employed in modern communication systems to reliably transfer data through noisy channels. Convolutional codes are widely used for this purpose because they are simple to encode and decode, and so have been employed in numerous communication systems. The focus of this thesis is a search for new and better ternary convolutional codes with large free distance, so that more errors can be detected and corrected. An algorithm is developed to obtain ternary convolutional codes (TCCs) with the best possible free distance. Tables are given of binary and ternary convolutional codes with the best free distance for rate 1/2 with encoder memory up to 14, rate 1/3 with encoder memory up to 9, and rate 1/4 with encoder memory up to 8. / Graduate
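The free distance searched for above is the minimum Hamming weight over all trellis paths that leave the all-zero state and later return to it. A sketch of such a search for a small binary rate-1/2 code (the thesis searches ternary codes; this binary version is only illustrative):

```python
from collections import deque

G = (0b111, 0b101)  # binary rate-1/2 (7,5) code, memory 2; illustrative

def step(state, bit):
    # one trellis transition: next state and output Hamming weight
    reg = (bit << 2) | state
    weight = sum(bin(reg & g).count("1") % 2 for g in G)
    return reg >> 1, weight

def free_distance():
    # BFS over paths that diverge from state 0 (input bit 1) and
    # return to it, pruning any path already at the current best weight
    best = float("inf")
    queue = deque([step(0, 1)])
    while queue:
        state, w = queue.popleft()
        if w >= best:
            continue
        if state == 0:
            best = w
            continue
        for bit in (0, 1):
            ns, dw = step(state, bit)
            queue.append((ns, w + dw))
    return best

print(free_distance())  # 5 for the (7,5) code
```

The search terminates because every cycle that avoids the zero state accumulates positive weight, so all surviving paths eventually hit the pruning bound.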
6

Power Characterization of a Gbit/s FPGA Convolutional LDPC Decoder

Li, Si-Yun January 2012 (has links)
In this thesis, we present an FPGA implementation of a parallel-node low-density parity-check convolutional code (PN-LDPC-CC) encoder and decoder. A 2.4 Gbit/s rate-1/2 (3, 6) PN-LDPC-CC encoder and decoder were implemented on an Altera development and education board (DE4). Detailed power measurements of the FPGA board for various configurations of the design were conducted to characterize the power consumption of the decoder module. For an Eb/N0 of 5 dB, the decoder with 9 processor cores (pipelined decoder iteration stages) has a bit-error-rate performance of 10^-10 and achieves an energy-per-coded-bit of 1.683 nJ based on raw power measurements. For Eb/N0 < 5 dB, increasing Eb/N0 effectively reduces the decoder power and energy-per-coded-bit for configurations with 5 or more processor cores. The incremental decoder power cost and incremental energy-per-coded-bit also show a linearly decreasing trend with each additional processor core. Additional experiments account for the effect of the efficiency of the DC/DC converter circuitry on the raw power measurement data. Further experiments quantify the effect of clipping thresholds and the bit width of each processor core on bit-error-rate (BER) performance, power consumption, and logic utilization of the decoder. A "6Core" decoder with growing bit-width log-likelihood ratios (LLRs) was found to have BER performance near that of a 6-bit "6Core" decoder, while consuming power and logic resources similar to those of a 5-bit "6Core" decoder.
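Energy-per-coded-bit is measured power divided by coded throughput, so the abstract's figures imply a decoder power of roughly 4 W, assuming the 9-core energy figure applies at the full 2.4 Gbit/s coded rate (an assumption, since the abstract does not state the operating throughput for that measurement):

```python
# Back-compute implied decoder power from the abstract's figures.
# Assumes the 1.683 nJ/coded-bit figure holds at 2.4 Gbit/s.
throughput_bps = 2.4e9         # coded bits per second
energy_per_bit_j = 1.683e-9    # joules per coded bit

implied_power_w = energy_per_bit_j * throughput_bps
print(f"{implied_power_w:.4f} W")  # 4.0392 W
```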
8

On lowering the error-floor of low-complexity turbo-codes

Blazek, Zeljko 26 November 2018 (has links)
Turbo-codes are a popular error correction method for applications requiring bit error rates from 10^-3 to 10^-6, such as wireless multimedia applications. In order to reduce the complexity of the turbo-decoder, it is advantageous to use the simplest possible constituent codes, such as 4-state recursive systematic convolutional (RSC) codes. However, for such codes, the error floor can be high, making them unable to achieve the target bit error range. In this dissertation, two methods of lowering the error floor are investigated: interleaver selection, and selective puncturing of data bits. Through the use of appropriate code design criteria, various types of interleavers and various puncturing parameters are evaluated. It was found that careful selection of interleavers and puncturing parameters achieves a substantial reduction in the error floor. Of the interleaver types investigated, the variable s-random type was found to provide the best performance. For the puncturing parameters, puncturing of both the data and parity bits of the turbo-code, as well as puncturing only the parity bits, were considered. It was found that for applications requiring BERs around 10^-3, it is sufficient to puncture only the parity bits. However, for applications that require the full range of BER values, or for applications where the FER is the important design parameter, puncturing some of the data bits appears to be beneficial. / Graduate
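The s-random interleavers evaluated above spread nearby input positions far apart at the output, which breaks up low-weight error patterns. A sketch of the classic s-random construction (the "variable s-random" variant found best in the dissertation is a refinement of this; all parameters here are illustrative):

```python
import random

def s_random_interleaver(n, s, seed=0, max_restarts=200):
    # Greedy s-random construction: any two positions within s of each
    # other at the input must map at least s apart at the output.
    # Restart from scratch if the greedy choice dead-ends.
    rng = random.Random(seed)
    for _ in range(max_restarts):
        remaining, perm, ok = list(range(n)), [], True
        for i in range(n):
            cands = [c for c in remaining
                     if all(abs(c - perm[j]) >= s
                            for j in range(max(0, i - s), i))]
            if not cands:
                ok = False
                break
            pick = rng.choice(cands)
            perm.append(pick)
            remaining.remove(pick)
        if ok:
            return perm
    raise RuntimeError("no s-random permutation found; try a smaller s")

perm = s_random_interleaver(32, 3)
print(perm)  # a permutation of 0..31 satisfying the spread constraint
```

In practice s is chosen near sqrt(n/2); larger s gives better spreading but makes the greedy construction more likely to dead-end.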
9

Concatenation of Space-Time Block Codes with Convolutional Codes

Ali, Saajed 27 February 2004 (has links)
Multiple antennas help in combating the destructive effects of fading as well as improving the spectral efficiency of a communication system. Receive diversity techniques like maximal ratio receive combining have been popular means of introducing multiple antennas into communication systems. Space-time block codes present a way of introducing transmit diversity into the communication system with complexity and performance similar to maximal ratio receive combining. In this thesis, we study the performance of space-time block codes in a Rayleigh fading channel. In particular, the quasi-static assumption on the fading channel is removed to study how the space-time block coded system behaves in fast fading. In this context, the complexity versus performance trade-off for a space-time block coded receiver is studied. As a means to improve the performance of space-time block coded systems, concatenation with convolutional codes is introduced. The improvement in diversity order from the introduction of convolutional codes into the space-time block coded system is discussed. A general analytic expression for the error performance of a space-time block coded system is derived. This expression is used to obtain general expressions for the error performance of convolutionally concatenated space-time block coded systems under both hard and soft decision decoding. Simulation results are presented and compared with the analytical results. / Master of Science
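The canonical instance of the transmit-diversity scheme described above is the 2x1 Alamouti space-time block code, which achieves the same diversity order as two-branch maximal ratio receive combining. A noiseless sketch under the quasi-static channel assumption (the thesis also studies the fast-fading case, where this assumption fails):

```python
# Alamouti 2-Tx space-time block code: two symbols over two time slots.
# Quasi-static assumption: h1, h2 constant across both slots.

def alamouti_encode(s1, s2):
    # rows = time slots, columns = transmit antennas
    return [(s1, s2),
            (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(r1, r2, h1, h2):
    # linear combining; orthogonality of the code decouples s1 and s2
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1_hat, s2_hat

# noiseless check: each received slot is the channel-weighted row sum
h1, h2 = 0.8 + 0.3j, -0.5 + 0.9j
s1, s2 = 1 + 1j, -1 + 1j
(x11, x12), (x21, x22) = alamouti_encode(s1, s2)
r1 = h1 * x11 + h2 * x12
r2 = h1 * x21 + h2 * x22
print(alamouti_combine(r1, r2, h1, h2))  # recovers (s1, s2)
```

The combining gain g = |h1|^2 + |h2|^2 is exactly the two-branch MRRC gain, which is why the transmit and receive diversity schemes have matching performance.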
10

Space-time Coded Modulation Design in Slow Fading

Elkhazin, Akrum 08 March 2010 (has links)
This dissertation examines multi-antenna transceiver design over flat-fading wireless channels. Bit Interleaved Coded Modulation (BICM) and MultiLevel Coded Modulation (MLCM) transmitter structures are considered, as well as the use of an optional spatial precoder under slow and quasi-static fading conditions. At the receiver, MultiStage Decoder (MSD) and Iterative Detection and Decoding (IDD) strategies are applied. Precoder, mapper and subcode designs are optimized for different receiver structures over the different antenna and fading scenarios. Under slow and quasi-static channel conditions, fade resistant multi-antenna transmission is achieved through a combination of linear spatial precoding and non-linear multi-dimensional mapping. A time-varying random unitary precoder is proposed, with significant performance gains over spatial interleaving. The fade resistant properties of multidimensional random mapping are also analyzed. For MLCM architectures, a group random labelling strategy is proposed for large antenna systems. The use of complexity constrained receivers in BICM and MLCM transmissions is explored. Two multi-antenna detectors are proposed based on a group detection strategy, whose complexity can be adjusted through the group size parameter. These detectors show performance gains over the Minimum Mean Squared Error (MMSE) detector in spatially multiplexed systems having an excess number of transmit antennas. A class of irregular convolutional codes is proposed for use in BICM transmissions. An irregular convolutional code is formed by encoding fractions of bits with different puncture patterns and mother codes of different memory. The code profile is designed with the aid of extrinsic information transfer charts, based on the channel and mapping function characteristics. In multi-antenna applications, these codes outperform convolutional turbo codes under independent and quasi-static fading conditions.
For finite length transmissions, MLCM-MSD performance is affected by the mapping function. Labelling schemes such as set partitioning and multidimensional random labelling generate a large spread of subcode rates. A class of generalized Low Density Parity Check (LDPC) codes is proposed to improve low-rate subcode performance. For MLCM-MSD transmissions, the proposed generalized LDPC codes outperform conventional LDPC code construction over a wide range of channels and design rates.
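A low-density parity-check code, as referenced above, is defined by a sparse parity-check matrix H; a vector is a codeword exactly when every parity check is satisfied. A toy sketch (this H is purely illustrative, not a code from the dissertation):

```python
# Toy LDPC-style parity check: c is a codeword iff H c = 0 (mod 2).
# Each row of H is one sparse parity check over the code bits.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def is_codeword(c):
    return all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)

print(is_codeword([1, 1, 0, 0, 1, 1]))  # True: all three checks pass
print(is_codeword([1, 0, 0, 0, 0, 0]))  # False: checks 1 and 3 fail
```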
