
Low-density parity-check codes : construction and implementation.

Malema, Gabofetswe Alafang, January 2007
Low-density parity-check (LDPC) codes have been shown to have good error-correcting performance, approaching Shannon's limit. Good error-correcting performance enables efficient and reliable communication. However, an LDPC decoding algorithm must be executed efficiently to meet the cost, time, power and bandwidth requirements of target applications, and the constructed codes must also meet the error-rate requirements of those applications. Since their rediscovery, there has been much research on LDPC code construction and implementation. LDPC codes can be designed over a wide space of parameters such as girth, rate and length, and there is no unique construction method. Existing construction methods are limited in some way in producing codes that perform well and are easily implementable for a given rate and length; methods are needed that construct codes over a wide range of rates and lengths with good performance and ease of hardware implementation. LDPC hardware design and implementation depend on the structure of the target code and are as varied as LDPC matrix designs and constructions. Several factors must be considered, including the decoding-algorithm computations, the processing-node interconnection network, the number of processing nodes, the amount of memory, the number of quantization bits and the decoding delay, each of which can be handled in several different ways.

This thesis is about the construction of LDPC codes and their hardware implementation. The construction and implementation issues mentioned above are too many to address in one thesis; the main contribution of this thesis is the development of construction methods for some classes of structured LDPC codes, together with techniques for reducing decoding time. We introduce two main methods for constructing structured codes. In the first method, column-weight-two LDPC codes are derived from distance graphs; a wide range of girths, rates and lengths is obtained compared to existing methods, and the performance and implementation complexity of the obtained codes depend on the structure of their corresponding distance graphs. In the second method, a search algorithm based on the bit-filling and progressive-edge-growth algorithms is introduced for constructing quasi-cyclic LDPC codes. The algorithm can be used to form the distance or Tanner graph of a code, and it also obtains codes over a wide range of parameters. Cycles of length four are avoided by observing the row-column constraint; row-column connections observing this condition are searched sequentially or randomly. Although the girth conditions are not sufficient beyond six, codes with larger girths were easily obtained, especially at low rates. The advantage of this algorithm over other methods is its flexibility: it can construct codes of a given rate and length with girth of at least six for any sub-matrix configuration or rearrangement, and the code size is easily varied by increasing or decreasing the sub-matrix size. Codes obtained using a sequential search criterion show poor performance at low girths (6 and 8), while random searches result in good-performing codes.

Quasi-cyclic codes can be implemented in a variety of decoder architectures; one of the many options is the choice of processing-node interconnect. We show how quasi-cyclic code processing can be scheduled through a multistage network. Although these networks have more delay than other modes of communication, they offer more flexibility at a reasonable cost; Banyan and Benes networks are suggested as the most suitable. Decoding delay is another issue considered in decoder design and implementation. In this thesis, we overlap check-node and variable-node computations to reduce decoding time. Three techniques are discussed, two of which are introduced in this thesis: code-matrix permutation, matrix-space restriction and sub-matrix row-column scheduling. Matrix permutation rearranges the parity-check matrix so that rows and columns with no connections in common are separated. This technique can be applied to any matrix; its effectiveness largely depends on the structure of the code, and we show that its success also depends on the row and column weights. Matrix-space restriction can likewise be applied to any code and gives a fixed reduction in time (amount of overlap); its success depends on the amount of restriction and may be traded against performance loss. The third technique, already suggested in the literature, relies on the internal cyclic structure of the sub-matrices to achieve overlapping and is limited to LDPC matrices in which the number of sub-matrices equals the row and column weights. We show that it can be applied to codes with a larger number of sub-matrices than code weights, although in this case maximum overlap is not guaranteed, and we calculate a lower bound on the amount of overlapping. Overlapping can be applied to any sub-matrix configuration of quasi-cyclic codes by arbitrarily choosing the starting rows for processing. Overlapped decoding time depends on inter-iteration waiting times; we show that there are upper bounds on waiting times that depend on the code weights, and waiting times can be further reduced by restricting shifts in identity sub-matrices or by using smaller sub-matrices. This overlapping technique can reduce decoding time by up to 50% compared with conventional message and computation scheduling. The matrix permutation and space restriction techniques result in decoder architectures that are flexible in terms of code weights and size, because rows and columns are processed in sequential order to achieve overlapping. In the existing technique, by contrast, all sub-matrices must be processed in parallel to achieve overlapping, which requires at least as many processing units as sub-matrices, with processing units and memory distributed according to the sub-matrix arrangement; this leads to high complexity or inflexibility in the decoder architecture. We propose a simple, programmable, high-throughput decoder architecture based on the matrix permutation and space restriction techniques. / Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2007
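To make the quasi-cyclic structure and the row-column constraint concrete, here is a small Python sketch (not the author's search algorithm; the base-matrix shifts are hypothetical) that assembles a parity-check matrix from circulant permutation blocks and tests for the length-four cycles that the constraint rules out.

```python
import numpy as np
from itertools import combinations

def circulant(p, shift):
    """p x p identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_ldpc_matrix(shifts, p):
    """Assemble a quasi-cyclic parity-check matrix from a base matrix
    of circulant shift values (an entry of -1 denotes an all-zero block)."""
    blocks = [[np.zeros((p, p), dtype=int) if s < 0 else circulant(p, s)
               for s in row] for row in shifts]
    return np.block(blocks)

def satisfies_row_column_constraint(H):
    """True if no two rows share more than one common 1-position,
    i.e. the Tanner graph has no length-four cycles (girth >= 6)."""
    for r1, r2 in combinations(range(H.shape[0]), 2):
        if np.dot(H[r1], H[r2]) > 1:
            return False
    return True

# Hypothetical 2x4 base matrix of shifts, circulant size p = 5
shifts = [[0, 1, 2, 4],
          [0, 2, 4, 3]]
H = qc_ldpc_matrix(shifts, 5)
print(H.shape)                              # (10, 20)
print(satisfies_row_column_constraint(H))   # True: no 4-cycles
```

Growing or shrinking p rescales the code length without changing the base structure, which is the size flexibility the abstract refers to.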

Joint Equalization and Decoding via Convex Optimization

Kim, Byung Hak, May 2012
The unifying theme of this dissertation is the development of new solutions for decoding and inference problems based on convex optimization methods. The first part considers the joint detection and decoding problem for low-density parity-check (LDPC) codes on finite-state channels (FSCs). Hard-disk drives (magnetic recording systems), where the required error rate after decoding is too low to be verified by simulation, are among the most important applications of this research. Recently, LDPC codes have attracted a lot of attention in the magnetic storage industry, and some hard-disk drives have started using iterative decoding. Despite progress in reduced-complexity detection and decoding algorithms, there has been some resistance to deploying turbo-equalization (TE) structures (with iterative detectors/decoders) in magnetic-recording systems because of error floors and the difficulty of accurately predicting performance at very low error rates. To address this problem for channels with memory, such as FSCs, we propose a new decoding algorithm based on a well-defined convex optimization problem; in particular, it is based on the linear-programming (LP) formulation of the joint decoding problem for LDPC codes over FSCs. It exhibits two favorable properties: provable convergence and predictable error floors (via pseudo-codeword analysis). Since general-purpose LP solvers are too complex to make the joint LP decoder feasible in practice, we develop an efficient iterative solver for the joint LP decoder by taking advantage of its dual-domain structure. The main advantage of this approach is that it combines the predictability and superior performance of joint LP decoding with the computational complexity of TE. The second part of this dissertation considers the matrix completion problem: the recovery of a data matrix from incomplete, or even corrupted, entries of an unknown matrix. Recommender systems are good representatives of this problem, and this research is important for the design of information-retrieval systems that require very high scalability. We show that our IMP algorithm reduces, in practice, the well-known cold-start problem associated with collaborative filtering systems.
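For readers unfamiliar with LP decoding, the sketch below shows the basic Feldman-style LP relaxation for a memoryless channel that the joint formulation above extends. It is illustrative only (toy parity-check matrix and made-up LLRs), not the dissertation's joint detector-decoder or its dual-domain solver.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def lp_decode(H, llr):
    """Feldman-style LP decoding over the fundamental polytope.

    Minimizes sum_i llr[i] * x[i] subject to, for every check c and
    every odd-sized subset S of its neighborhood N(c):
        sum(x_i, i in S) - sum(x_i, i in N(c) outside S) <= |S| - 1.
    Returns the (possibly fractional) LP optimizer."""
    m, n = H.shape
    A, b = [], []
    for c in range(m):
        nbrs = np.flatnonzero(H[c])
        for k in range(1, len(nbrs) + 1, 2):          # odd |S| only
            for S in combinations(nbrs, k):
                row = np.zeros(n)
                row[list(nbrs)] = -1.0
                row[list(S)] = 1.0
                A.append(row)
                b.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x

# Toy (7,4) parity-check matrix and hypothetical channel LLRs
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([-1.2, 0.8, 0.7, -0.5, 1.1, 0.9, 0.6])
x = lp_decode(H, llr)
print(np.round(x, 3))  # integral solution => ML certificate; fractional => pseudo-codeword
```

An integral optimizer certifies the ML codeword, while fractional optima are exactly the pseudo-codewords whose analysis makes the error floor predictable.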

Exit charts based analysis and design of rateless codes for the erasure and Gaussian channels

Mothi Venkatesan, Sabaresan, 02 June 2009
Luby Transform (LT) codes were the first class of universal erasure codes to fully realize the concept of scalable, fault-tolerant distribution of data over computer networks, also called a digital fountain. Raptor codes, a generalization of LT codes, were later introduced to trade off complexity against performance. In this work, we show that an even broader class of codes exists that are near-optimal for the erasure channel, and that Raptor codes form a special case. More precisely, Raptor-like codes can be designed based on an iterative (joint) decoding schedule in which information is transferred between the LT decoder and an outer decoder iteratively. The design of these codes can be formulated as an LP problem using EXIT charts and density evolution. We show the existence of codes, other than Raptor codes, that perform as well as existing ones. We extend this framework of jointly decoding the component codes to additive white Gaussian noise channels and introduce the design of rateless codes for these channels. Under this setting, for asymptotic lengths, it is possible to design codes that work for a class of channels defined by their signal-to-noise ratio. We show that good profiles can be designed using density evolution and a Gaussian approximation. EXIT charts prove to be an intuitive tool and aid in formulating the code-design problem as an LP problem; because EXIT charts are not exact, owing to their inherent approximations, we use density evolution to analyze the performance of these codes. In the Gaussian case, we show that for asymptotic lengths a range of rateless code designs exists to choose from, based on the required complexity and overhead. Moreover, under this framework we can design incrementally redundant schemes for already existing outer codes to make the communication system more robust to channel-noise variations.
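As background for the LT/Raptor discussion above, here is a minimal sketch of LT encoding with the robust soliton degree distribution; the parameters c and delta and the symbol sizes are illustrative choices, not values from the thesis.

```python
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution for an LT code with k input symbols."""
    R = c * np.log(k / delta) * np.sqrt(k)
    rho = np.zeros(k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = np.zeros(k + 1)
    pivot = int(round(k / R))
    for d in range(1, pivot):
        tau[d] = R / (d * k)
    tau[pivot] = R * np.log(R / delta) / k
    dist = rho + tau
    return dist / dist.sum()

def lt_encode(data, n_out, rng):
    """Generate n_out LT-coded symbols, each the XOR of a random subset of data."""
    k = len(data)
    dist = robust_soliton(k)
    out = []
    for _ in range(n_out):
        d = rng.choice(np.arange(k + 1), p=dist)   # draw a degree
        idx = rng.choice(k, size=d, replace=False)  # pick d neighbors
        symbol = 0
        for i in idx:
            symbol ^= data[i]
        out.append((tuple(idx), symbol))            # neighbors + coded symbol
    return out

rng = np.random.default_rng(0)
data = list(rng.integers(0, 256, size=100))   # 100 hypothetical byte-symbols
coded = lt_encode(data, 120, rng)             # roughly 20% overhead
print(coded[0])
```

A Raptor code simply precodes `data` with an outer erasure code before this step, which is the outer decoder that the joint schedule above exchanges information with.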

Performance Comparison Of Message Passing Decoding Algorithms For Binary And Non-binary Low Density Parity Check (ldpc) Codes

Uzunoglu, Cihan, 01 December 2007
In this thesis, we investigate the fundamentals of Low-Density Parity-Check (LDPC) codes over binary and non-binary alphabets. We focus in particular on message-passing decoding algorithms, whose messages can be defined in different ways, such as a posteriori probabilities, log-likelihood ratios, or Fourier transforms of probabilities. We present simulation results comparing the performance of short-block-length binary and non-binary LDPC codes with regular and irregular structures over the GF(2), GF(4) and GF(8) alphabets. By comparing LDPC codes with mean variable-node degrees of 3, 2.8 and 2.6, we observe that non-binary alphabets improve performance provided the mean column weight is selected carefully, with performance improving in the order GF(2), GF(4), GF(8).
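As a concrete instance of the log-likelihood-ratio message definition mentioned above, the following sketch implements the standard sum-product (tanh-rule) update at a single binary check node; the input LLRs are made up for illustration, and this is a generic textbook update rather than code from the thesis.

```python
import numpy as np

def check_node_update(llr_in):
    """LLR-domain sum-product update at one binary check node.

    For incoming messages L_1..L_d, the outgoing message on edge j is
        L_out_j = 2 * atanh( prod_{i != j} tanh(L_i / 2) ).
    """
    t = np.tanh(np.asarray(llr_in, dtype=float) / 2.0)
    out = np.empty_like(t)
    for j in range(len(t)):
        prod = np.prod(np.delete(t, j))             # leave-one-out product
        out[j] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out

# Three hypothetical incoming LLRs at a degree-3 check node
print(check_node_update([1.5, -0.7, 2.2]))
```

Over GF(q) the analogous update convolves probability vectors, which is where the Fourier-transform message definition becomes the efficient choice.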

On applications of puncturing in error-correction coding

Klinc, Demijan, 05 April 2011
This thesis investigates applications of puncturing in error-correction coding and physical-layer security, with an emphasis on binary and non-binary LDPC codes. A theoretical framework for the analysis of punctured binary LDPC codes at short block lengths is developed, and a novel decoding scheme is designed that achieves considerably faster convergence than conventional approaches. Subsequently, optimized puncturing and shortening are studied for non-binary LDPC codes over binary-input channels. A framework for the analysis of punctured/shortened non-binary LDPC codes over the binary erasure channel is developed, which enables the optimization of puncturing and shortening patterns. Insight from this analysis is used to develop well-performing algorithms for puncturing and shortening non-binary LDPC codes at finite block lengths. It is confirmed that symbol-wise puncturing generally performs poorly and that bit-wise punctured non-binary LDPC codes can significantly outperform their binary counterparts, making them an attractive solution for future communication systems, both for error correction and for distributed compression. Puncturing is also considered in the context of physical-layer security. It is shown that puncturing can be used effectively for coding over the wiretap channel to hide the message bits from eavesdroppers, and it is shown how puncturing patterns can be optimized for enhanced secrecy. Asymptotic analysis confirms that eavesdroppers are forced to operate at BERs very close to 0.5, even if their signal is only slightly worse than that of the legitimate receivers. The proposed coding scheme is naturally applicable at finite block lengths and allows for efficient, almost-linear-time encoding. Finally, it is shown how error-correcting codes can be used to solve the open problem of compressing data encrypted with block ciphers such as AES. Coding schemes for multiple chaining modes are proposed, and it is verified that considerable compression gains are attainable for binary sources.
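The sketch below illustrates the generic mechanics of puncturing that the thesis analyzes: punctured positions are simply not transmitted, and a belief-propagation decoder re-inserts them as zero-LLR erasures to be recovered iteratively. The pattern and channel values here are hypothetical.

```python
import numpy as np

def apply_puncturing(codeword, pattern):
    """Transmit only the bits where pattern == 1 (a 0 marks a punctured bit)."""
    return codeword[pattern == 1]

def decoder_input_llrs(received_llrs, pattern):
    """Rebuild the length-n LLR vector for the decoder: received LLRs go
    in the transmitted positions, punctured positions get LLR = 0,
    a complete erasure that belief propagation must fill back in."""
    llrs = np.zeros(len(pattern))
    llrs[pattern == 1] = received_llrs
    return llrs

pattern = np.array([1, 1, 0, 1, 0, 1, 1])   # hypothetical pattern: rate raised by puncturing 2 of 7 bits
cw = np.array([0, 1, 1, 0, 1, 0, 1])
tx = apply_puncturing(cw, pattern)           # only 5 bits are sent
rx_llrs = np.where(tx == 0, 2.0, -2.0)       # toy channel LLRs for the sent bits
print(decoder_input_llrs(rx_llrs, pattern))
```

In the wiretap setting the same mechanism hides information: placing the message in punctured positions means it never appears on the channel directly and must be inferred, which is far harder at the eavesdropper's worse SNR.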

Coded Modulation for High Speed Optical Transport Networks

Batshon, Hussam George, January 2010
At a time when almost 1.75 billion people around the world use the Internet on a regular basis, optical communication over optical fibers, used in long-distance and high-demand applications, has to be capable of providing higher communication speed and reliability. In recent years, strong demand has been driving the dense wavelength-division-multiplexing network upgrade from 10 Gb/s per channel to more spectrally efficient 40 Gb/s or 100 Gb/s per wavelength channel, and beyond. 100 Gb/s Ethernet is currently under standardization, and in a couple of years 1 Tb/s Ethernet will be standardized as well for different applications, such as local area networks (LANs) and wide area networks (WANs). The major concern at such high data rates is the degradation in signal quality due to linear and nonlinear impairments, in particular polarization-mode dispersion (PMD) and intrachannel nonlinearities. Moreover, higher-speed transceivers are expensive, so the required rates are preferably achieved using commercially available components operating at lower speeds. In this dissertation, different LDPC-coded modulation techniques are presented that offer higher spectral efficiency and/or power efficiency, in addition to aggregate rates of up to 1 Tb/s per wavelength. These modulation formats are based on bit-interleaved coded modulation (BICM) and include: (i) three-dimensional LDPC-coded modulation using hybrid direct and coherent detection, (ii) multidimensional LDPC-coded modulation, (iii) subcarrier-multiplexed four-dimensional LDPC-coded modulation, (iv) hybrid subcarrier/amplitude/phase/polarization LDPC-coded modulation, and (v) iterative-polar-quantization-based LDPC-coded modulation.
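As a minimal illustration of the BICM principle underlying these formats, the sketch below interleaves coded bits and Gray-maps them onto QPSK symbols; the random interleaver and the mapping are generic textbook choices, not the dissertation's multidimensional formats.

```python
import numpy as np

def bicm_map_qpsk(bits, interleaver):
    """Minimal BICM transmitter step: interleave the coded bits,
    then Gray-map bit pairs onto unit-energy QPSK symbols."""
    permuted = bits[interleaver]
    pairs = permuted.reshape(-1, 2)
    # Gray mapping: 00 -> (+1+1j), 01 -> (+1-1j), 11 -> (-1-1j), 10 -> (-1+1j)
    i = 1 - 2 * pairs[:, 0]
    q = 1 - 2 * pairs[:, 1]
    return (i + 1j * q) / np.sqrt(2)

rng = np.random.default_rng(1)
coded_bits = rng.integers(0, 2, size=16)     # hypothetical LDPC-coded bits
interleaver = rng.permutation(16)            # hypothetical random interleaver
print(bicm_map_qpsk(coded_bits, interleaver))
```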

Physical-layer security

Bloch, Matthieu, 05 May 2008
As wireless networks continue to flourish worldwide and play an increasingly prominent role, it has become crucial to provide effective solutions to the inherent security issues associated with a wireless transmission medium. Unlike traditional solutions, which usually handle security at the application layer, the primary concern of this thesis is to analyze and develop solutions based on coding techniques at the physical layer. First, an information-theoretically secure communication protocol for quasi-static fading channels was developed and its performance with respect to theoretical limits was analyzed. A key element of the protocol is a reconciliation scheme for secret-key agreement based on low-density parity-check codes, which is specifically designed to operate on non-binary random variables and offers high reconciliation efficiency. Second, the fundamental trade-offs between cooperation and security were analyzed by investigating the transmission of confidential messages to cooperative relays. This information-theoretic study highlighted the importance of jamming as a means to increase secrecy and confirmed the importance of carefully chosen relaying strategies. Third, other applications of physical-layer security were investigated. Specifically, the use of secret-key agreement techniques for alternative cryptographic purposes was analyzed, and a framework for the design of practical information-theoretic commitment protocols over noisy channels was proposed. Finally, the benefit of using physical-layer coding techniques beyond the physical layer was illustrated by studying security issues in client-server networks. A coding scheme exploiting packet losses at the network layer was proposed to ensure reliable communication between clients and servers and security against colluding attackers.

LDPC kódy / LDPC codes

Hrouza, Ondřej, January 2012
This thesis deals with LDPC codes. It describes methods for creating the parity-check matrix, with emphasis on structured methods based on finite geometries: Euclidean geometry and projective geometry. The next area addressed is the decoding of LDPC codes. Four methods are presented: the hard-decision algorithm, the bit-flipping algorithm, the sum-product algorithm and the log-likelihood algorithm, with the main focus on iterative decoding methods. The practical output of this work is a program, LDPC codes, created in the Matlab environment. The program is divided into two parts: Practise LDPC codes and Simulation LDPC codes. The results obtained with the Simulation LDPC codes program are used to compare methods for constructing and decoding LDPC codes. The decoding methods were compared using BER characteristics and the time dependence of each method on various LDPC code parameters (number of iterations, size of the parity-check matrix).
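Of the four decoders listed, bit-flipping is the simplest to show compactly. Below is a minimal Python sketch of a Gallager-style bit-flipping decoder (the thesis program itself was written in Matlab; the parity-check matrix and received word here are made up for illustration).

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Gallager-style bit-flipping decoding of a binary LDPC code.

    H: (m, n) parity-check matrix over GF(2); y: hard-decision received word.
    Each iteration flips the bits involved in the largest number of
    unsatisfied checks; stops once all checks are satisfied."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                 # valid codeword found
        # count, for each bit, how many failed checks it participates in
        fails = syndrome @ H
        x = np.where(fails == fails.max(), x ^ 1, x)
    return x, False

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
y = np.array([1, 0, 1, 0, 1, 1])           # hypothetical received word
x_hat, ok = bit_flip_decode(H, y)
print(x_hat, ok)
```

Because it works only on hard decisions, bit-flipping trades the BER performance of the sum-product and log-likelihood decoders for much lower per-iteration cost, which is exactly the trade-off the thesis's simulations compare.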

Circular Trellis based Low Density Parity Check Codes

Anitei, Irina, 19 December 2008
No description available.
