31

Error control with constrained codes

04 February 2014 (has links)
M.Ing. (Electrical and Electronic Engineering) / In the ideal communication system no noise is present and no errors are made. In practice, however, communication takes place over noisy channels that introduce errors into the information, so these errors must be controlled. Furthermore, several channels impose runlength or disparity constraints on the bit stream. Until recently, error control on these channels was applied separately from the imposition of the input restrictions with constrained codes. Such a system performs poorly under certain conditions, and is more complex and expensive to implement than systems where the error control is an integral part of the constrained code or decoder. In this study, we first investigate the error multiplication phenomenon of constrained codes. An algorithm is presented that minimizes the error propagation probabilities of memoryless decoders according to two criteria. A second algorithm is presented alongside the first to calculate the resulting bit error probabilities. The second approach to the error control of constrained codes is the construction of combined error-correcting constrained finite-state machine codes. We investigate the known construction techniques and construct several new codes using extensions of these techniques. These codes complement or improve on the known error-correcting constrained codes with regard to complexity, rate or error-correcting capability. Furthermore, these codes exhibit good error behaviour and favourable power spectral densities.
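The runlength constraints mentioned in this abstract can be made concrete with a small sketch (not from the thesis; the function name and conventions are illustrative). A (d, k)-runlength-limited sequence requires every run of zeros between consecutive ones to be at least d and at most k long; this simplified check ignores the leading and trailing zero runs:

```python
def satisfies_rll(bits, d, k):
    """Check whether a binary sequence obeys a (d, k) runlength
    constraint: every run of zeros *between consecutive ones* must
    have length at least d and at most k. Leading and trailing zero
    runs are ignored in this simplified sketch."""
    runs = []
    count = 0
    started = False
    for b in bits:
        if b == 1:
            if started:
                runs.append(count)
            count = 0
            started = True
        else:
            count += 1
    return all(d <= r <= k for r in runs)
```

For example, under a (2, 3) constraint the sequence 1 0 0 1 0 0 0 1 is admissible, while two adjacent ones violate any constraint with d ≥ 1.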
32

Coding structure and properties for correcting insertion/deletion errors

08 August 2012 (has links)
D. Ing. / The digital transmission of information necessitates compensation for disturbances introduced by the channel. The compensation method usually used in digital communications is error correcting coding. The errors usually encountered are additive in nature, i.e. errors where only symbol values are changed. Understandably, the field of additive error correcting codes has become a mature research field. Remarkable progress has been made during the past 50 years, to such an extent that near-Shannon capacity can be reached using suitable coding techniques. Sometimes the channel disturbances may result in the loss and/or gain of symbols and a subsequent loss of word or frame synchronisation. Unless precautions are taken, a synchronisation error may propagate and corrupt large blocks of data. Typical precautions taken against synchronisation errors are: out-of-band clock signals distributed to the transmission equipment in a network; stringent requirements on clock stability and jitter; limits on the number of repeaters and regenerators to curb jitter and delays; line coding to facilitate better clock extraction; and use of framing methods on the coding level. Most transmission systems in use today will stop data transmission until reliable synchronisation is restored. E1 multiplexing systems are still the predominant technology among fixed telephone line operators and GSM operators, and recovering from a loss of synchronisation (the FAS alarm) typically takes approximately 10 seconds. Considering that the transmission speed is 2048 kbit/s, a large quantity of data is lost during this process. The purpose of this study is therefore to broaden the understanding of insertion/deletion correcting binary codes. This will be achieved by presenting new properties and coding techniques for multiple insertion/deletion correcting codes. Mostly binary codes will be considered, but in some instances the results may also hold for non-binary codes.
As a secondary purpose, we hope to generate interest in this field of study and to enable other researchers to explore the mechanisms of insertion and/or deletion correcting codes more deeply.
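A concrete illustration of a single-deletion-correcting code (a standard textbook construction, not one of the thesis's new techniques): the Varshamov-Tenengolts code VT_a(n) contains every binary word x_1 … x_n whose weighted checksum Σ i·x_i is congruent to a modulo n + 1, and any single deletion from such a word can be corrected.

```python
def in_vt_code(bits, a=0):
    """Varshamov-Tenengolts single-deletion-correcting code membership:
    a length-n binary word x belongs to VT_a(n) exactly when
    sum(i * x_i for i = 1..n) is congruent to a modulo n + 1."""
    n = len(bits)
    return sum(i * b for i, b in enumerate(bits, start=1)) % (n + 1) == a
```

For n = 4, the word 1001 has checksum 1 + 4 = 5 ≡ 0 (mod 5), so it lies in VT_0(4), whereas 1010 (checksum 4) does not.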
33

Investigation of the use of infinite impulse response filters to construct linear block codes

Chandran, Aneesh January 2016 (has links)
A dissertation submitted in ful lment of the requirements for the degree of Masters in Science in the Information Engineering School of Electrical and Information Engineering August 2016 / The work presented extends and contributes to research in error-control coding and information theory. The work focuses on the construction of block codes using an IIR lter structure. Although previous works in this area uses FIR lter structures for error-detection, it was inherently used in conjunction with other error-control codes, there has not been an investigation into using IIR lter structures to create codewords, let alone to justify its validity. In the research presented, linear block codes are created using IIR lters, and the error-correcting capabilities are investigated. The construction of short codes that achieve the Griesmer bound are shown. The potential to construct long codes are discussed and how the construction is constrained due to high computational complexity is shown. The G-matrices for these codes are also obtained from a computer search, which is shown to not have a Quasi-Cyclic structure, and these codewords have been tested to show that they are not cyclic. Further analysis has shown that IIR lter structures implements truncated cyclic codes, which are shown to be implementable using an FIR lter. The research also shows that the codewords created from IIR lter structures are valid by decoding using an existing iterative soft-decision decoder. This represents a unique and valuable contribution to the eld of error-control coding and information theory. / MT2017
34

A system on chip based error detection and correction implementation for nanosatellites

Hillier, Caleb Pedro January 2018 (has links)
Thesis (Master of Engineering in Electrical Engineering)--Cape Peninsula University of Technology, 2018. / This thesis focuses on preventing and overcoming the effects of radiation on RAM on board the ZA cube 2 nanosatellite. The main objective is to design, implement and test an effective error detection and correction (EDAC) system for nanosatellite applications using a SoC development board. Through an in-depth literature review, all aspects of single-event effects are investigated, from space radiation right up to the implementation of an EDAC system. During this study, Hamming code was identified as a suitable EDAC scheme for mitigating single-event effects. During the course of this thesis, a detailed radiation study of ZA cube 2's space environment is conducted. This provides insight into the environment to which the satellite will be exposed during orbit. It also provides insight that will allow accurate testing, should accelerator tests with protons and heavy ions be necessary. In order to understand space radiation, a radiation study using ZA cube 2's orbital parameters was conducted using the OMERE and TRIM software. This study included earth's radiation belts, galactic cosmic radiation, solar particle events and shielding. The results confirm the need for mitigation techniques capable of EDAC. A detailed look at different EDAC schemes, together with a code comparison study, was conducted. There are two types of error control codes, namely error detection codes and error correction codes. For protection against radiation, nanosatellites use error correction codes such as Hamming, Hadamard, repetition, four-dimensional parity, Golay, BCH and Reed-Solomon codes. Using detection capability, correction capability, code rate and bit overhead, each EDAC scheme is evaluated and compared. This study provides the reader with a good understanding of all common EDAC schemes.
The field of nanosatellites is constantly evolving and growing rapidly. This creates a growing demand for more advanced and reliable EDAC systems that are capable of protecting all memory aspects of satellites. Hamming codes are extensively studied and implemented using different approaches, languages and software. After testing three variations of Hamming codes, in both Matlab and VHDL, the final and most effective version was the Hamming [16, 11, 4]2 code. This code guarantees single error correction and double error detection. All the developed Hamming codes are suited to FPGA implementation, for which they were tested thoroughly using simulation software and optimised.
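The single-error-correcting behaviour of Hamming codes described in this abstract can be sketched with the classic Hamming(7,4) code (a textbook construction given for illustration only; the thesis's [16, 11, 4] code and its VHDL implementation are not reproduced here):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a Hamming(7,4) codeword.
    Positions (1-indexed): parity bits at 1, 2, 4; data at 3, 5, 6, 7."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; the syndrome value is the 1-indexed
    position of a single flipped bit, which is corrected in place.
    A zero syndrome means no single-bit error was detected."""
    c = list(c)
    s = (c[0] ^ c[2] ^ c[4] ^ c[6]) \
        + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6]) \
        + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6])
    if s:
        c[s - 1] ^= 1
    return c
```

Extending each codeword with an overall parity bit gives the [8, 4] SEC-DED variant that additionally flags, but does not correct, double errors.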
35

Decoding of two dimensional symbologies on uneven surfaces.

January 2002 (has links)
by Tse, Yan Tung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 75-76). / Abstracts in English and Chinese. / Abstract --- p.i / 摘要 --- p.i / Acknowledgements --- p.ii / Table of Contents --- p.iii / List of Figures --- p.vi / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Types of 2D Barcodes --- p.3 / Chapter 1.2 --- Reading 2D Barcodes --- p.5 / Chapter 1.3 --- Thesis Organization --- p.8 / Chapter Chapter 2 --- Related Works --- p.9 / Chapter 2.1 --- DataMatrix --- p.9 / Chapter 2.2 --- Original MaxiCode --- p.11 / Chapter 2.3 --- Spatial Methods for MaxiCode --- p.12 / Chapter 2.4 --- Summary --- p.14 / Chapter Chapter 3 --- Reading 2D Barcode on Uneven Surfaces --- p.15 / Chapter 3.1 --- The Image Processing Framework --- p.15 / Chapter 3.2 --- The Scanning Environment --- p.17 / Chapter 3.3 --- Perspective Transform --- p.20 / Chapter Chapter 4 --- Uneven Surface Models --- p.23 / Chapter 4.1 --- Cylindrical Surfaces --- p.24 / Chapter 4.2 --- General Uneven Surfaces --- p.26 / Chapter Chapter 5 --- "Patch-wise" Barcode Reading --- p.28 / Chapter 5.1 --- The Inputs --- p.28 / Chapter 5.2 --- The Registration Process --- p.29 / Chapter 5.3 --- Patch Cutting --- p.33 / Chapter Chapter 6 --- Registering Cells in a Patch --- p.37 / Chapter 6.1 --- Document Skew Detection: Projection Profiles --- p.38 / Chapter 6.2 --- Radon Transform Based Orientation Detection --- p.41 / Chapter 6.3 --- Identifying Row/column Boundaries --- p.45 / Chapter 6.4 --- Detecting Cell Width --- p.50 / Chapter 6.5 --- Calculating Transform Parameters --- p.53 / Chapter Chapter 7 --- Patch Registration --- p.57 / Chapter 7.1 --- Matching Adjacent patches --- p.57 / Chapter 7.2 --- Expanding to the Entire Code Area --- p.60 / Chapter Chapter 8 --- Simulation and Results --- p.61 / Chapter 8.1 --- Implementation Details --- p.61 / Chapter 8.2 --- Comparison Methods --- p.63 / Chapter 8.3 --- Results --- p.63 /
Chapter 8.4 --- Computation Costs --- p.68 / Chapter Chapter 9 --- Conclusion and Further Works --- p.73 / Bibliography --- p.75
36

Coherent network error correction. / 網絡編碼與糾錯 / CUHK electronic theses & dissertations collection / Wang luo bian ma yu jiu cuo

January 2008 (has links)
Based on the weight properties of network codes, we present refined versions of the Hamming bound, the Singleton bound and the Gilbert-Varshamov bound for linear network codes. We give two different algorithms to construct network codes with minimum distance constraints, both of which can achieve the refined Singleton bound. The first algorithm finds a codebook based on a given set of local encoding kernels defining a linear network code. The second algorithm finds a set of local encoding kernels based on a given classical error-correcting code satisfying a certain minimum distance requirement. / First, the error correction/detection capabilities of a network code are completely characterized by a parameter which is equivalent to the minimum Hamming distance when the network code is linear and the weight measure on the error vectors is the Hamming weight. Our results imply that for a linear network code with the Hamming weight as the weight measure on the error vectors, the capability of the code is fully characterized by a single minimum distance. By contrast, for a nonlinear network code, two different minimum distances are needed to characterize the capabilities of the code for error correction and for error detection. This leads to the surprising discovery that for a nonlinear network code, the number of correctable errors can be more than half the number of detectable errors. (For classical algebraic codes, the number of correctable errors is always the largest integer not greater than half the number of detectable errors.) / Network error correction provides a new method to correct errors in network communications by extending the strength of classical error-correcting codes from a point-to-point model to networks. This thesis considers a number of fundamental problems in coherent network error correction. / We further define equivalence classes of weight measures with respect to a general channel model.
Specifically, for any given channel, the minimum weight decoders for two different weight measures are equivalent if the two weight measures belong to the same equivalence class. In the special case of network coding, we study four weight measures and show that all four are in the same equivalence class for linear network codes. Hence they are all equivalent for error correction and detection under minimum weight decoding. / Yang, Shenghao. / Adviser: Raymond W.H. Yeung. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3708. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 89-93). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
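The classical relationship invoked by this abstract, that a code with minimum distance d corrects t = ⌊(d−1)/2⌋ errors and detects d − 1, can be checked on a toy code (an illustrative sketch of the point-to-point case, not the thesis's network-coding bounds):

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of coordinates in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def min_distance(code):
    """Minimum pairwise Hamming distance over all codeword pairs."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

# Length-3 repetition code: d = 3, so it corrects (d - 1) // 2 = 1 error
# and detects d - 1 = 2 errors under classical minimum-distance decoding.
repetition = [(0, 0, 0), (1, 1, 1)]
d = min_distance(repetition)
```

The thesis's surprising result is precisely that for nonlinear network codes this tight coupling between correctable and detectable errors breaks down.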
37

A simple RLS-POCS solution for reduced complexity ADSL impulse shortening

Helms, Sheldon J. 03 September 1999 (has links)
Recently, with the growth of the World Wide Web, the need for high-speed data communications has grown tremendously. Several access techniques have been proposed which utilize the existing copper twisted-pair cabling. Of these, the xDSL family, particularly ADSL and VDSL, has shown great promise in providing broadband or near-broadband access through common telephone lines. A critical component of the ADSL and VDSL systems is the guard band needed to eliminate the interference caused by previously transmitted blocks. This guard band takes the form of redundant samples at the start of every transmit block, and must be at least as long as the channel impulse response. Since the required guard band length is much greater than the length of the actual transmitted samples, techniques to shorten the channel impulse response must be considered. In this thesis, a new algorithm based on the RLS error minimization and POCS optimization techniques is applied to the channel impulse-shortening problem in an ADSL environment. As will be shown, the proposed algorithm provides a much better solution, with a minimal increase in complexity, compared to the existing LMS techniques. / Graduation date: 2000
38

Classification context in a machine learning approach to predicting protein secondary structure

Langford, Bill T. 13 May 1993 (has links)
An important problem in molecular biology is to predict the secondary structure of proteins from their primary structure. The primary structure of a protein is the sequence of amino acid residues. The secondary structure is an abstract description of the shape of the folded protein, with regions identified as alpha helix, beta strands, and random coil. Existing methods of secondary structure prediction examine a short segment of the primary structure and predict the secondary structure class (alpha, beta, coil) of an individual residue centered in that segment. The last few years of research have failed to improve these methods beyond the level of 65% correct predictions. This thesis investigates whether these methods can be improved by permitting them to examine externally-supplied predictions for the secondary structure of other residues in the segment. The externally-supplied predictions are called the "classification context," because they provide contextual information about the secondary structure classifications of neighboring residues. The classification context could be provided by an existing algorithm that made initial secondary structure predictions, and then these could be taken as input by a second algorithm that would attempt to improve the predictions. A series of experiments on both real and simulated classification context were performed to measure the possible improvement that could be obtained from classification context. The results showed that the classification context provided by current algorithms does not yield improved performance when used as input by those same algorithms. However, if the classification context is generated by randomly damaging the correct classifications, substantial performance improvements are possible. Even small amounts of randomly damaged correct context improves performance. / Graduation date: 1994
39

Decentralized Coding in Unreliable Communication Networks

Lin, Yunfeng 30 August 2010 (has links)
Many modern communication networks suffer significantly from the unreliability of their nodes and links. To deal with failures, centralized erasure codes have traditionally been used extensively to improve reliability by introducing data redundancy. In this thesis, we address several issues in implementing erasure codes in a decentralized way, such that coding operations are spread across multiple nodes. Our solutions are based on fountain codes and randomized network coding, because their simplicity and randomization properties make them amenable to decentralized implementation. Our contributions consist of four parts. First, we propose a novel decentralized implementation of fountain codes utilizing random walks. Our solution does not require node location information and needs only a small local routing table whose size is proportional to the number of neighbors. Second, we introduce priority random linear codes, which achieve partial data recovery by partitioning and encoding data into non-overlapping or overlapping subsets. Third, we present geometric random linear codes, which decrease communication costs in decoding significantly by introducing modest data redundancy in a hierarchical fashion. Finally, we study the application of network coding in disruption tolerant networks. We show that network coding achieves shorter data transmission time than replication, especially when data buffers are limited. We also propose an efficient network-coding-based protocol variant which attains similar transmission delay but much lower transmission costs compared to a protocol based on epidemic routing.
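The random linear coding idea underlying these contributions can be sketched over GF(2) (a generic illustration; the packet format, function names and field size are assumptions, and practical systems use larger fields plus buffer management):

```python
import random

def rlnc_encode(packets, n_coded, rng=random):
    """Random linear coding over GF(2): each coded packet is the XOR of a
    random nonzero subset of the source packets, carried together with
    its coefficient vector."""
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):                  # avoid the useless all-zero row
            coeffs[rng.randrange(k)] = 1
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2) on (coefficients, payload) rows.
    Returns the k source packets, or None if the received rows do not
    yet span GF(2)^k (i.e. too few innovative packets arrived)."""
    basis = {}                               # pivot column -> (coeffs, payload)
    for coeffs, payload in coded:
        coeffs = coeffs[:]
        for col, (bc, bp) in basis.items():  # reduce against known pivots
            if coeffs[col]:
                coeffs = [a ^ b for a, b in zip(coeffs, bc)]
                payload ^= bp
        if any(coeffs):
            basis[coeffs.index(1)] = (coeffs, payload)
    if len(basis) < k:
        return None
    for col in sorted(basis, reverse=True):  # back-substitute to unit rows
        bc, bp = basis[col]
        for other in sorted(basis):
            if other != col and basis[other][0][col]:
                oc, op = basis[other]
                basis[other] = ([a ^ b for a, b in zip(oc, bc)], op ^ bp)
    return [basis[i][1] for i in range(k)]
```

A receiver can attempt decoding after every arrival; each linearly independent ("innovative") coded packet adds one pivot, and decoding succeeds once k pivots have been collected.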
