1

Minimum weight decoding

Taleb, Farshid January 1989 (has links)
No description available.
2

Practical error control techniques for transmission over noisy channels

Martin, Ian January 1998 (has links)
No description available.
3

Decoding algorithms of Reed-Solomon code

Czynszak, Szymon January 2011 (has links)
Reed-Solomon codes are nowadays broadly used in many fields of data transmission. The use of error-correcting codes involves two main operations: encoding information before sending it over a communication channel, and decoding the received information on the other side. There is a vast number of decoding algorithms for Reed-Solomon codes, each with specific features, and knowledge of these features is needed to choose the algorithm that satisfies the requirements of a given system. This thesis evaluates the cyclic decoding algorithm, the Peterson-Gorenstein-Zierler algorithm, the Berlekamp-Massey algorithm, the Sugiyama algorithm with and without erasures, and the Guruswami-Sudan algorithm. The algorithms were implemented in software and in hardware, the implementations were simulated, and methods to improve their operation are proposed.
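For a sense of what one of the evaluated decoders does, here is a minimal sketch of the Berlekamp-Massey step over a prime field GF(p). The prime modulus P = 929 and the function name are illustrative assumptions; practical Reed-Solomon decoders work over extension fields GF(2^m), whose table-based arithmetic is omitted here.

```python
# Sketch only: Berlekamp-Massey over a prime field GF(p). Given a syndrome
# sequence, it synthesizes the shortest LFSR, i.e. the error-locator
# polynomial. p = 929 is an illustrative modulus, not a standard's choice.
P = 929

def berlekamp_massey(syndromes, p=P):
    """Return (error-locator coefficients C(x), number of errors L)."""
    C = [1]            # current connection polynomial C(x)
    B = [1]            # copy of C before the last length change
    L, m, b = 0, 1, 1
    for n, s_n in enumerate(syndromes):
        # discrepancy d = s_n + sum_{i=1..L} C[i] * s_{n-i}  (mod p)
        d = s_n
        for i in range(1, L + 1):
            d = (d + C[i] * syndromes[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p      # d / b via Fermat inverse
        T = C[:]
        # C(x) <- C(x) - (d/b) * x^m * B(x)
        C += [0] * (len(B) + m - len(C))
        for i, bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L
```

For example, berlekamp_massey([1, 1, 1]) returns ([1, 928], 1), i.e. the single-tap recurrence s_n = s_{n-1} in GF(929).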
4

Performance Comparison Of Message Passing Decoding Algorithms For Binary And Non-binary Low Density Parity Check (LDPC) Codes

Uzunoglu, Cihan 01 December 2007 (has links) (PDF)
In this thesis, we investigate the basics of Low-Density Parity-Check (LDPC) codes over binary and non-binary alphabets. We focus especially on message passing decoding algorithms, which use different message definitions such as a posteriori probabilities, log-likelihood ratios and Fourier transforms of probabilities. We present simulation results comparing the performance of small-blocklength binary and non-binary LDPC codes with regular and irregular structures over the GF(2), GF(4) and GF(8) alphabets. Comparing LDPC codes with average variable node degrees of 3, 2.8 and 2.6, we observe that non-binary alphabets improve performance when the mean column weight is selected carefully, with performance improving in the order GF(2), GF(4), GF(8).
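The "Fourier transforms of probabilities" message definition can be made concrete for fields of characteristic 2, where the transform is the Walsh-Hadamard transform and the check node's convolution of symbol distributions becomes a pointwise product. The sketch below is a simplification under stated assumptions: it convolves all incoming messages rather than excluding the target edge, and the function names are invented for illustration.

```python
import numpy as np

def walsh_hadamard(v):
    """Fast Walsh-Hadamard transform (length must be a power of 2)."""
    v = v.copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2
    return v

def check_node_convolution(msgs):
    """Distribution of the GF(2^m) sum (bitwise XOR) of independent symbols,
    computed as a product in the transform domain. `msgs` is a list of
    length-q probability vectors, q = 2^m."""
    q = len(msgs[0])
    prod = np.ones(q)
    for m in msgs:
        prod *= walsh_hadamard(np.asarray(m, dtype=float))
    out = walsh_hadamard(prod) / q      # inverse WHT = forward WHT / q
    return out / out.sum()              # renormalize to a distribution
```

Since every element of GF(2^m) is its own additive inverse, this XOR-convolution is exactly the quantity a check node needs, which is why the transform-domain update is popular for non-binary LDPC decoding.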
5

Flexible Constraint Length Viterbi Decoders On Large Wire-area Interconnection Topologies

Garga, Ganesh 07 1900 (has links)
To achieve the goal of efficient "anytime, anywhere" communication, it is essential to develop mobile devices which can efficiently support multiple wireless communication standards. Also, in order to accommodate the further evolution of these standards, it should be possible to modify or upgrade the operation of the mobile devices without having to recall previously deployed devices. This is achievable if as much functionality of the mobile device as possible is provided through software. A mobile device which fits this description is called a Software Defined Radio (SDR). Reconfigurable hardware-based solutions are an attractive option for realizing SDRs, as they can potentially provide a favourable combination of the flexibility of a DSP or a GPP and the efficiency of an ASIC. The work presented in this thesis discusses the development of efficient reconfigurable hardware for one of the most energy-intensive functionalities in the mobile device, namely Forward Error Correction (FEC). FEC is required in order to achieve reliable transfer of information at minimal transmit power levels, and is achieved by encoding the information in a process called channel coding. Previous studies have shown that the FEC unit accounts for around 40% of the total energy consumption of the mobile unit. In addition, modern wireless standards place the additional requirement of flexibility on the FEC unit. Thus, the FEC unit of the mobile device represents a considerable amount of computing ability that needs to be accommodated into a very small power, area and energy budget. Two channel coding techniques have found widespread use in most modern wireless standards, namely convolutional coding and turbo coding. The Viterbi algorithm is most widely used for decoding convolutionally encoded sequences, and it can be applied iteratively to decode turbo codes. Hence, this thesis specifically focusses on developing architectures for flexible Viterbi decoders. Chapter 2 provides a description of the Viterbi and turbo decoding techniques. The flexibility requirements placed on the Viterbi decoder by modern standards are of two types: code rate flexibility and constraint length flexibility. The code rate dictates the number of received bits which are handled together as a symbol at the receiver; hence, code rate flexibility needs to be built into the basic computing units which implement the Viterbi algorithm. The constraint length dictates the number of computations required per received symbol as well as the manner of transfer of results between these computations. Hence, assuming that multiple processing units perform the required computations, supporting constraint length flexibility necessitates changes in the interconnection network connecting the computing units. A constraint length K Viterbi decoder needs 2^(K-1) computations to be performed per received symbol. The results of the computations are exchanged among the computing units in order to prepare for the next received symbol. The communication pattern according to which these results are exchanged forms a de Bruijn graph with 2^(K-1) nodes. This implies that providing constraint length flexibility requires being able to realize de Bruijn graphs of various sizes on the interconnection network connecting the processing units. This thesis focusses on providing constraint length flexibility in an efficient manner.
Quite clearly, the topology employed for interconnecting the processing units has a huge effect on the efficiency with which multiple constraint lengths can be supported. This thesis explores the usefulness of interconnection topologies similar to the de Bruijn graph for building constraint-length-flexible Viterbi decoders. Five different topologies are considered, discussed under two headings.

De Bruijn network-based architectures: The interconnection network of chief interest in this thesis is the de Bruijn network itself, as it is identical to the communication pattern of a Viterbi decoder of a given constraint length. The problem of realizing flexible constraint length Viterbi decoders on a de Bruijn network is approached in two ways. The first is an embedding-theoretic approach, in which supporting multiple constraint lengths on a de Bruijn network is treated as the problem of embedding smaller de Bruijn graphs in a larger de Bruijn graph. Mathematical manipulations show that this embedding can generally be accomplished with a maximum dilation of , where N is the number of computing nodes in the physical network, while simultaneously avoiding any congestion of the physical links. In this case, however, the mapping of decoder states onto processing nodes is fixed. A second scheme is derived based on a variable assignment of decoder states onto computing nodes, which turns out to be more efficient than the embedding-based approach: the maximum number of cycles per stage is limited to 2 irrespective of the maximum constraint length to be supported. In addition, for smaller constraint lengths, multiple smaller decoders can be executed in parallel on the physical network. Consequently, post logic synthesis, this architecture is found to be more area-efficient than the embedding-theoretic architecture, and it scales more efficiently.

Alternative architectures: Several interconnection topologies are closely connected to the de Bruijn graph, and hence form attractive alternatives for realizing flexible constraint length Viterbi decoders. We consider two more topologies from this class, namely the shuffle-exchange network and the flattened butterfly network. The variable state assignment scheme developed for the de Bruijn network applies directly to the shuffle-exchange network; the average number of clock cycles per stage is limited to 4 in this case, again independent of the constraint length to be supported. On the flattened butterfly (which is identical to the hypercube), a state scheduling scheme similar to bitonic sorting is used; this architecture offers the ideal throughput of one decoded bit every clock cycle, for any constraint length. For comparison with a more general-purpose topology, we consider a flexible constraint length Viterbi decoder architecture based on a 2D mesh, a popular choice for general-purpose as well as signal processing applications; the state scheduling scheme used here is also similar to that used for bitonic sorting on a mesh. All the alternative architectures can execute multiple smaller decoders in parallel on the larger interconnection network.
Inferences: Following logic synthesis and power estimation, the de Bruijn network-based architecture with the variable state assignment scheme yields the lowest (area)×(time) product, while the flattened butterfly network-based architecture yields the lowest (area)×(time)² product. This means that the de Bruijn network-based architecture is the best choice for moderate-throughput applications, while the flattened butterfly network-based architecture is the best choice for high-throughput applications. However, as the flattened butterfly network is less scalable in size than the de Bruijn network, it can be concluded that, among the architectures considered in this thesis, the de Bruijn network-based architecture with the variable state assignment scheme is overall an attractive choice for realizing flexible constraint length Viterbi decoders.
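To make the de Bruijn communication pattern concrete, the sketch below (with hypothetical names) enumerates the successor states of a constraint-length-K Viterbi decoder and performs one add-compare-select stage over them; the branch-metric lookup is an assumed input, precomputed per received symbol.

```python
def de_bruijn_successors(K):
    """Communication pattern of a constraint-length-K Viterbi decoder:
    each of the 2**(K-1) states forwards its path metric to the two states
    reached by shifting in a 0 or a 1 -- the edges of a de Bruijn graph."""
    n = 1 << (K - 1)
    return {s: ((2 * s) % n, (2 * s + 1) % n) for s in range(n)}

def acs_stage(metrics, branch, succs):
    """One add-compare-select stage: each next state keeps the smaller of
    its two incoming (path metric + branch metric) candidates.
    `branch[(s, t)]` is an assumed per-symbol branch-metric table."""
    new = {}
    for s, targets in succs.items():
        for t in targets:
            cand = metrics[s] + branch[(s, t)]
            if t not in new or cand < new[t]:
                new[t] = cand
    return new
```

For K = 3, de_bruijn_successors(3) gives {0: (0, 1), 1: (2, 3), 2: (0, 1), 3: (2, 3)}: states 0 and 2 both feed states 0 and 1, which is exactly the butterfly structure the flexible interconnects in this thesis must realize for every supported K.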
6

Comparison Of Decoding Algorithms For Low-density Parity-check Codes

Kolayli, Mert 01 September 2006 (has links) (PDF)
Low-density parity-check (LDPC) codes are a subclass of linear block codes whose parity-check matrices have a low ratio of non-zero elements to all elements. This property is exploited to define low-complexity decoding algorithms. LDPC codes have good distance properties and error correction capability near the Shannon limit. In this thesis, the sum-product and the bit-flip decoding algorithms for LDPC codes are implemented in MATLAB on an Intel Pentium M 1.86 GHz processor. Simulations of the two decoding algorithms are run over the additive white Gaussian noise (AWGN) channel, varying code parameters such as the information rate, the blocklength of the code and the column weight of the parity-check matrix, and the performance of the two decoding algorithms is compared on the basis of these simulation results. As expected, the sum-product algorithm, which is based on soft-decision decoding, outperforms the bit-flip algorithm, which depends on hard-decision decoding. Our simulations show that the performance of LDPC codes improves with increasing blocklength and number of iterations for both decoding algorithms. Since the sum-product algorithm has a lower error floor, increasing the number of iterations is more effective for the sum-product decoder than for the bit-flip decoder. The bit-flip algorithm performs as expected, with better BER performance at lower information rates; however, the performance of the sum-product decoder deteriorates for information rates below 0.5 instead of improving. With irregular construction of LDPC codes, a performance improvement is observed, especially at low SNR values.
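As a rough illustration of the hard-decision side of this comparison, here is a minimal bit-flipping decoder sketch. It flips, in each iteration, every bit involved in the largest number of failing checks; this is one common variant, not necessarily the exact one simulated in the thesis.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit-flipping decoder.
    H: (m, n) binary parity-check matrix (numpy int array).
    y: length-n vector of received hard decisions (0/1 ints).
    Returns (codeword estimate, True if all checks are satisfied)."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2          # one entry per parity check
        if not syndrome.any():
            return x, True               # valid codeword found
        # per bit: number of unsatisfied checks it participates in
        fail_counts = syndrome.dot(H)
        x[fail_counts == fail_counts.max()] ^= 1   # flip the worst bits
    return x, False
```

The low density of H is what keeps each iteration cheap: fail_counts touches only the non-zero entries, so the work per iteration grows with the number of edges in the Tanner graph rather than with m×n.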
7

Parallelized Architectures For Low Latency Turbo Structures

Gazi, Orhan 01 January 2007 (has links) (PDF)
In this thesis, we present low-latency general concatenated code structures suitable for parallel processing. We propose parallel decodable serially concatenated codes (PDSCCs), a general structure from which many variants of serially concatenated codes can be constructed, and from this most general structure we derive parallel decodable serially concatenated convolutional codes (PDSCCCs). Convolutional product codes, which are instances of PDSCCCs, are studied in detail. PDSCCCs have much lower decoding latency than classical serially concatenated convolutional codes while showing almost the same performance. Using the same idea, we propose parallel decodable turbo codes (PDTCs), a general structure for constructing parallel concatenated codes; PDTCs have much lower latency than classical turbo codes while achieving similar performance. We extend the approach proposed for the construction of parallel decodable concatenated codes to trellis coded modulation, turbo channel equalization and space-time trellis codes, and show that low-latency systems can be constructed in the same way. Parallel decoding introduces new implementation problems; one such problem is memory collision, which occurs when multiple decoder units attempt to access the same memory device. We propose novel interleaver structures which prevent the memory collision problem while achieving performance close to that of other interleavers.
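The memory collision problem can be checked mechanically: with M decoder units working in lockstep on M memory banks, a collision occurs whenever two units address the same bank in the same cycle. The sketch below assumes contiguous banking and a window-parallel schedule; both are illustrative assumptions, not the thesis's specific architecture.

```python
def has_memory_collision(perm, num_units):
    """Check whether `num_units` parallel decoder units, each walking one
    contiguous window of the interleaved sequence in lockstep, ever address
    the same memory bank in the same clock cycle.
    perm: the interleaver as a permutation of range(len(perm))."""
    n = len(perm)
    assert n % num_units == 0, "window-parallel schedule assumed"
    window = n // num_units
    bank = lambda idx: idx // window            # contiguous banking (assumed)
    for t in range(window):                     # cycle t of the schedule
        banks = [bank(perm[u * window + t]) for u in range(num_units)]
        if len(set(banks)) < num_units:
            return True                         # two units hit one bank
    return False
```

For example, has_memory_collision([3, 1, 2, 0], 2) returns True: in cycle 0 both units address bank 1. Collision-free interleavers of the kind proposed here would make this predicate return False for every cycle.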
8

Exploiting Hidden Resources to Design Collision-Embracing Protocols for Emerging Wireless Networks

Das, Tanmoy January 2019 (has links)
No description available.
9

Etude de la pertinence des paramètres stochastiques sur des modèles de Markov cachés / Study of the relevance of stochastic parameters on hidden Markov models

Robles, Bernard 18 December 2013 (has links)
The starting point of this work is Pascal Vrignat's thesis on modelling the degradation levels of a dynamic system using Hidden Markov Models (HMMs), for an industrial maintenance application. Four levels were defined: S1 for a production stoppage and S2 to S4 for gradual degradation. Drawing on field observations collected in various companies of the region, we built a synthetic HMM-based model to simulate the different degradation levels of a real system. We first assess the relevance of the different observations, or symbols, used to model an industrial process, introducing the entropic filter for this purpose. Then, with a view to improving the model, we try to answer the questions: which sampling is the most relevant, and how many symbols are needed to evaluate the model best? We then study the characteristics of several possible models of an industrial process in order to derive the best architecture, using test criteria such as Shannon entropy, the Akaike criterion and statistical tests. Finally, we compare the results of the synthetic model with those from industrial applications, and propose a readjustment of the model to bring it closer to field reality. / As part of preventive maintenance, many companies are trying to improve the decision support of their experts. This thesis aims to assist our industrial partners in improving their maintenance operations (pastry production, an aluminium smelter and a glass manufacturing plant). To model industrial processes, different topologies of Hidden Markov Models have been used, with a view to finding the best topology by studying the relevance of the model outputs (also called signatures). This thesis should make it possible to select a model framework (a framework includes a topology, a learning and decoding algorithm, and a distribution) by assessing the signature given by different synthetic models. To evaluate this signature, the following widely used criteria have been applied: Shannon entropy, maximum likelihood, the Akaike Information Criterion, the Bayesian Information Criterion and statistical tests.
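A decoding algorithm of the kind such HMM frameworks rely on is Viterbi decoding of the hidden state sequence (here, the degradation level). Below is a minimal log-domain sketch; the array shapes and names are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def viterbi_path(obs, log_A, log_B, log_pi):
    """Most likely hidden-state sequence (e.g. degradation levels S1..S4)
    for a sequence of observed symbol indices, in the log domain for
    numerical stability.
    log_A: (S, S) log transition matrix, log_A[i, j] = log P(j | i)
    log_B: (S, V) log emission matrix, log_B[i, v] = log P(v | i)
    log_pi: (S,) log initial distribution."""
    T = len(obs)
    delta = log_pi + log_B[:, obs[0]]        # best log-prob ending in each state
    back = np.zeros((T, len(log_pi)), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A      # scores[i, j]: path i -> j
        back[t] = scores.argmax(axis=0)      # best predecessor of each state
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # trace predecessors backwards
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Comparing decoded paths like this one against ground-truth maintenance records is one concrete way the relevance of a model's signature can be scored.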
10

A Modified Sum-Product Algorithm over Graphs with Short Cycles

Raveendran, Nithin January 2015 (has links) (PDF)
We investigate the limitations of the sum-product algorithm for binary low-density parity-check (LDPC) codes whose Tanner graphs contain isolated short cycles. The independence assumption on the messages passed, taken as reasonable in all graph configurations, fails most severely in graphical structures with short cycles. This work is a step towards understanding the effect of short cycles on the error floors of the sum-product algorithm. We propose a modified sum-product algorithm that accounts for the statistical dependency of the messages passed around a cycle of length 4. We also formulate the modified algorithm in the log domain, which eliminates the numerical instability and precision issues associated with the probability domain. Simulation results show a signal-to-noise ratio (SNR) improvement for the modified sum-product algorithm over the original algorithm, suggesting that modelling the dependency among messages improves the decisions and successfully mitigates the effects of length-4 cycles in the Tanner graph. The improvement is significant in the high-SNR region, pointing to a possible cause of the error floor effects on such graphs. Using density evolution techniques, we analysed the modified decoding algorithm: the threshold computed for the modified algorithm is higher than that computed for the sum-product algorithm, validating the observed simulation results. We also prove that the conditional entropy of a codeword given the estimate obtained using the modified algorithm is lower than with the original sum-product algorithm.
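For reference, the standard log-domain check-node update that the modified algorithm revises is the tanh rule sketched below; the thesis's correction for dependent messages on length-4 cycles is not reproduced here.

```python
import numpy as np

def check_node_update_log(llrs_in):
    """Standard sum-product check-node update in the log domain.
    llrs_in: LLRs of all incoming edges at one check node.
    Returns the outgoing LLR on each edge, computed from the other edges
    via tanh(L_out/2) = prod_j tanh(L_j/2)."""
    t = np.tanh(np.asarray(llrs_in, dtype=float) / 2.0)
    out = np.empty_like(t)
    for i in range(len(t)):
        prod = np.prod(np.delete(t, i))      # exclude the target edge
        # clip to keep arctanh finite in floating point
        out[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out
```

The independence assumption criticized above enters exactly in the product: when two incoming LLRs travelled around the same length-4 cycle, they are correlated and multiplying their tanh terms over-counts the shared evidence.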
