161

Decodificadores de baixa complexidade para códigos LDPC Q-ários / Low complexity decoders for Q-ary LDPC codes

Santos, Lailson Ferreira dos, 1990- 26 August 2018 (has links)
Orientadores: Jaime Portugheis, Celso de Almeida / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2014 / Abstract: This dissertation analyzes low-complexity decoding algorithms for low-density parity-check (LDPC) codes over GF(q) on a binary-input AWGN (BI-AWGN) channel. A literature review of binary algorithms based on bit-flipping techniques is presented and their performances are compared.
The main contributions of this dissertation are associated with the investigation of the weighted symbol-flipping (WSF) algorithm for nonbinary LDPC codes. The WSF algorithm has two main parts: the symbol-flipping function and the rule for selecting the new candidate symbol. First, it is demonstrated that a rule for choosing the new candidate symbol based on absolute values of observed channel outputs is equivalent to a rule based on Euclidean distances. Then, it is verified that, when the infinite-loop detection mechanism is disabled, the weighting factor of the flipping function has negligible impact on decoder performance and can be ignored. Motivated by this fact, a symbol-flipping (SF) decoding algorithm is proposed whose flipping function requires only integer syndrome values and flips multiple symbols in parallel. It is observed that SF decoding outperforms WSF decoding for q-ary codes with large Galois field order q / Mestrado / Telecomunicações e Telemática / Mestre em Engenharia Elétrica
162

Error control techniques for the compound channel.

Dmuchalsky, Theodore John. January 1971 (has links)
No description available.
163

Performance of Recursive Maximum Likelihood Turbo Decoding

Krishnamurthi, Sumitha 03 December 2003 (has links)
No description available.
164

An Introduction to S(5,8,24)

Beane, Maria Elizabeth 01 June 2011 (has links)
S(5,8,24) is one of the largest known Steiner systems and connects combinatorial designs, error-correcting codes, finite simple groups, and sphere packings in a truly remarkable way. This thesis discusses the underlying structure of S(5,8,24), its construction via the (24,12) Golay code, as well as its automorphism group, which is the Mathieu group M24, a member of the sporadic simple groups. Particular attention is paid to the calculation of the size of automorphism groups of Steiner systems using the Orbit-Stabilizer Theorem. We conclude with a section on the sphere packing problem and elaborate on how the 8-sets of S(5,8,24) can be used to form the Leech lattice, which Leech used to create the densest known sphere packing in 24 dimensions. The appendix contains code written for Matlab which can construct the octads of S(5,8,24), permute the elements to obtain isomorphic S(5,8,24) systems, and search for certain subsets of elements within the octads. / Master of Science
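The construction via the Golay code that the abstract describes can be sketched compactly. The snippet below is in Python rather than the thesis's Matlab, and it assumes one of the two standard generator polynomials of the (23,12) Golay code; the octads of S(5,8,24) are the supports of the 759 weight-8 codewords of the extended (24,12) code.

```python
# Construct the extended (24,12) Golay code and extract its 759 octads
# (weight-8 codewords), whose supports are the blocks of S(5,8,24).
# Assumes the standard generator polynomial
# g(x) = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1; the thesis's Matlab
# code may construct the code differently.

G_POLY = 0b110001110101  # coefficients of g(x), degree 11 down to 0

def golay_codeword(msg):
    """Encode a 12-bit message (int) as a 23-bit Golay codeword (int)."""
    c = 0
    for i in range(12):          # c(x) = m(x) * g(x) over GF(2)
        if (msg >> i) & 1:
            c ^= G_POLY << i
    return c

def octads():
    """Yield the supports (frozensets of 0..23) of all weight-8 words."""
    for msg in range(1 << 12):
        c = golay_codeword(msg)
        w = bin(c).count("1")
        c24 = (c << 1) | (w & 1)   # append overall parity -> (24,12) code
        if bin(c24).count("1") == 8:
            yield frozenset(i for i in range(24) if (c24 >> i) & 1)

blocks = set(octads())
print(len(blocks))  # 759 blocks of S(5,8,24)
```

Every 5-element subset of the 24 points lies in exactly one of these 759 octads, which is the defining property of the Steiner system.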
165

Performance of error correcting codes with random and burst errors

Asimopoulos, Nikos January 1983 (has links)
The errors that can occur in a computer system during data reading or recording in the memory can be of different types depending upon the memory organization. They can be random bit errors or burst errors. Therefore, if high reliability is required, the use of an error correcting technique that will be able to handle both types of errors is necessary. In this study the capability of some classes of error correcting codes is analysed and their performance with both types of errors is tested. Reed Solomon and concatenated codes are examined in more detail because they are known to be among the best classes of codes. In order to evaluate the performance of these codes two well known classes of codes are used: BCH codes and Fire codes. The performance of all the codes with regard to random error correction is analysed using a binary symmetric channel model. BCH codes are shown to be more powerful for average codeword lengths, but as the codeword length increases RS and concatenated codes perform better than BCH codes of the same rate of transmission. A new model for systems with burst errors is introduced with which a large variety of real systems can be simulated by choosing the appropriate distributions of burst errors. The performance of all these codes at correcting burst errors is simulated using this model. It is shown that RS codes and concatenated codes are very powerful with burst errors and can significantly increase the reliability of a signaling system subject to these types of errors. An advantage of RS codes compared to concatenated codes is that they can be very easily implemented and can be employed efficiently for systems with any codeword length. Concatenated codes can perform better than RS codes only when very long codewords are required. / M.S.
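The binary-symmetric-channel analysis mentioned above amounts to weighting each error pattern by its probability and summing over the patterns the decoder fails on. The sketch below illustrates the idea with a rate-1/3 repetition code, a deliberately simple stand-in for the BCH/RS codes of the thesis.

```python
# Exact block-error probability of a repetition-3 code on a binary
# symmetric channel BSC(p), computed by enumerating all error patterns.
# The repetition code is only an illustration; the enumeration principle
# is the same one used to analyse BCH/RS random-error performance.
from itertools import product

def repetition3_error_prob(p):
    """P(majority decoding fails) over 3 independent uses of a BSC(p)."""
    p_fail = 0.0
    for pattern in product([0, 1], repeat=3):   # 1 = bit flipped by channel
        flips = sum(pattern)
        prob = (p ** flips) * ((1 - p) ** (3 - flips))
        if flips >= 2:          # majority vote decodes incorrectly
            p_fail += prob
    return p_fail

p = 0.1
print(repetition3_error_prob(p))   # enumeration
print(3 * p**2 - 2 * p**3)         # closed form: 3p^2(1-p) + p^3
```

Both lines agree, confirming the closed form; for the longer BCH and RS codes of the thesis the same sum is taken over binomially weighted error weights rather than explicit patterns.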
166

Optimum implementation of BCH codes

Kumar, G. A. January 1983 (has links)
The Bose-Chaudhuri-Hocquenghem (BCH) codes are among the best constructive codes for channels in which errors affect successive symbols independently. The binary BCH codes, a subclass of BCH codes, are known to have good random error correcting capability, and Reed-Solomon (RS) codes, an important subclass of BCH codes, have very good burst error correcting capability. A concatenation of these two codes, the binary BCH/RS concatenated codes, can correct both random and burst errors. The decoding procedure for these codes is well documented. However, not much work has been done on the implementation of the decoding procedure. This thesis deals with the development of configurations for decoding binary BCH codes, RS codes and BCH/RS concatenated codes. The decoding procedure is first described. Sample calculations are shown to explain the decoding procedure. The decoding procedure consists of (1) 3 major steps for binary BCH codes and (2) 4 major steps for RS codes. Each of these steps can be implemented by either hardware or software, but the efficiency varies between the specific steps of the decoding procedure. For each step, both hardware and software implementations are discussed. The complexity and decoding delay for both methods of implementation are determined. The optimal combination, which offers fast execution time and overall system simplicity, is presented. A new procedure for designing BCH/RS concatenated codes is developed and presented in Chapter VI. The advantages of this new procedure are also discussed in Chapter VI. / M.S.
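Every step of BCH decoding rests on Galois-field arithmetic, and a common software implementation choice is log/antilog tables, which turn field multiplications into integer additions modulo the field order minus one. The sketch below uses GF(16) with the primitive polynomial x^4 + x + 1 purely as an illustrative choice; the thesis does not specify a field.

```python
# Build log/antilog tables for GF(2^4) with primitive polynomial
# p(x) = x^4 + x + 1 (an illustrative choice). With these tables, the
# field multiplications a software BCH decoder performs at every step
# reduce to table lookups and additions mod 15.
PRIM_POLY = 0b10011   # x^4 + x + 1
FIELD_SIZE = 16

antilog = [0] * 15            # antilog[i] = alpha^i as a bit-vector int
log = [0] * FIELD_SIZE        # log[antilog[i]] = i
x = 1
for i in range(15):
    antilog[i] = x
    log[x] = i
    x <<= 1                   # multiply by alpha
    if x & FIELD_SIZE:        # reduce modulo p(x)
        x ^= PRIM_POLY

def gf_mul(a, b):
    """Multiply two GF(16) elements via the log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return antilog[(log[a] + log[b]) % 15]
```

A hardware implementation would instead use a dedicated multiplier circuit over the same field, which is the kind of hardware/software trade-off the thesis weighs step by step.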
167

Hierarchical error correcting cassette file system

Siggia, Alan Dale. January 1977 (has links)
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1977 / Includes bibliographical references. / by Alan Dale Siggia. / B.S.
168

Development of system for teaching turbo code forward error correction techniques

Shi, Shuai January 2007 (has links)
Thesis (M.Tech.: Electronic Engineering)-Dept. of Electronic Engineering, Durban University of Technology, 2007. 1 v. (various pagings) / The objective was to develop a turbo code demonstration system for educational use. The aim was to build a system that would execute rapidly and produce a graphical display exemplifying the power of turbo codes and showing the effects of parameter variation.
169

Reed-Muller codes in error correction in wireless adhoc networks

Tezeren, Serdar U. 03 1900 (has links)
Approved for public release; distribution is unlimited / The IEEE 802.11a standard uses a coded orthogonal frequency division multiplexing (COFDM) scheme in the 5-GHz band to support data rates up to 54 Mbps. COFDM was chosen because of its robustness to multipath fading effects. In the standard, convolutional codes are used for error correction. This thesis examines the performance of the COFDM system with variable rate Reed-Muller (RM) error correction codes with a goal to reduce the peak-to-average power ratio (PAPR). Contrary to expectations, RM codes did not provide the expected improvement in PAPR reduction. Peak clipping and Hanning windowing techniques were investigated in order to reduce the PAPR. The results indicate that a tradeoff exists between the PAPR and the bit-error rate (BER) performance. Although peak clipping yielded considerable reduction in PAPR, it required high signal-to-noise ratios. On the other hand, Hanning windowing provided only a small reduction in PAPR with reasonable BER performance. / Lieutenant Junior Grade, Turkish Navy
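The PAPR metric the thesis measures comes directly from the time-domain OFDM symbol: peak instantaneous power divided by average power. The sketch below is a simplified illustration, not the thesis's simulation; it takes a direct IDFT of one 64-subcarrier BPSK symbol and ignores the pilots, guard subcarriers, and cyclic prefix that 802.11a additionally specifies.

```python
# PAPR of one OFDM symbol: inverse-DFT the subcarrier symbols, then
# compare peak power to average power. 64 BPSK subcarriers are a
# simplified illustration of an 802.11a symbol.
import cmath
import math

def papr_db(subcarriers):
    n = len(subcarriers)
    # Direct O(n^2) IDFT; adequate for a demonstration.
    time_samples = [
        sum(X * cmath.exp(2j * cmath.pi * k * t / n)
            for k, X in enumerate(subcarriers)) / n
        for t in range(n)
    ]
    powers = [abs(x) ** 2 for x in time_samples]
    return 10 * math.log10(max(powers) / (sum(powers) / n))

# Worst case: all subcarriers aligned in phase add coherently,
# giving PAPR = 10*log10(64), about 18.1 dB.
print(papr_db([1] * 64))
```

Coding schemes such as the RM codes the thesis investigates attack this problem by excluding the symbol sequences whose subcarriers add most coherently.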
170

Correcting bursts of adjacent deletions by adapting product codes

25 March 2015 (has links)
M.Ing. (Electrical and Electronic Engineering) / In this study, the problem of correcting bursts of adjacent deletions by adapting product codes was investigated. The first step in any digital transmission is to establish synchronization between the sending and receiving nodes. This initial synchronization ensures that the receiver samples the information bits at the correct interval. Unfortunately, synchronization is not guaranteed to last for the entire duration of data transmission. Though synchronization errors rarely occur, they have disastrous effects at the receiving end of transmission. These synchronization errors are modelled as either insertions or deletions in the transmitted data. In the best case scenario, these errors are restricted to single bit errors. In the worst case scenario, these errors lead to bursts of bits being incorrect. If these synchronization errors are not detected and corrected, they can cause a shift in the transmitted sequence which in turn leads to loss of synchronization. When a signal is subjected to synchronization errors, it is difficult to accurately recover the original data signal. In addition to the loss of synchronization, the information transmitted over the channel is also subjected to noise. This noise in the channel causes inversion errors within the signal. The objective of this dissertation is to investigate whether an error correction scheme can be designed that has the ability to detect and correct adjacent bursts of deletions and random inversion errors. This error correction scheme needed to make use of a product code matrix structure. This product matrix needed to incorporate both an error correction technique and a synchronization technique. The chosen error correcting techniques were Hamming and Reed-Solomon codes. The chosen synchronization techniques for this project were the marker technique or an adaptation of the Hamming code technique. In order to find an effective model, combinations of these models were simulated and compared.
From the research obtained and analyzed in this document it was found that, depending on the desired performance, complexity and code rate, an error correction scheme can be used in the efficient correction of bursts of adjacent deletions by adapting product codes.
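Of the synchronization techniques the abstract names, the marker technique is the simplest to illustrate: a fixed marker pattern follows every data block, and because a deletion shifts all later bits, the first block whose marker no longer lines up localizes the error. The block size, marker pattern, and all-ones payload below are arbitrary choices for the sketch, not the dissertation's parameters.

```python
# Marker technique for localizing a deletion: append a fixed marker after
# every data block; a deletion shifts all subsequent bits, so the first
# block whose marker slot fails to match brackets the deletion.
# Block size and marker pattern are illustrative choices.
MARKER = [0, 1]
BLOCK = 8

def add_markers(bits):
    """Interleave the marker after every BLOCK data bits."""
    out = []
    for i in range(0, len(bits), BLOCK):
        out.extend(bits[i:i + BLOCK])
        out.extend(MARKER)
    return out

def locate_deletion(stream, num_blocks):
    """Return the index of the first block whose marker is misaligned."""
    step = BLOCK + len(MARKER)
    for b in range(num_blocks):
        slot = stream[b * step + BLOCK : b * step + BLOCK + len(MARKER)]
        if slot != MARKER:
            return b
    return None  # all markers intact: no deletion detected

data = [1] * 16                # two all-ones blocks (illustrative payload)
tx = add_markers(data)
rx = tx[:3] + tx[4:]           # channel deletes one bit inside block 0
print(locate_deletion(rx, 2))  # -> 0
```

Once the affected block is bracketed this way, a row/column code in the product matrix, such as the Hamming or Reed-Solomon codes the dissertation combines with it, can restore the lost bit and any accompanying inversion errors.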
