1. Adaptable and enhanced error correction codes for efficient error and defect tolerance in memories
Datta, Rudrajit, 31 January 2012
Ongoing technology improvements and feature-size reduction have led to an increase in manufacturing-induced parameter variations. These variations affect memory cell circuits, making them unreliable at low voltages. Memories are very dense structures that are especially susceptible to defects, all the more so at lower voltages. Transient errors due to radiation, power supply noise, and similar disturbances can also cause bit flips in a memory. To protect the data integrity of the memory, an error correcting code (ECC) is generally employed. Existing ECCs, however, are either single-error-correcting or correct multiple errors at the cost of high redundancy or long correction times.
This research addresses the problem of memory reliability under adverse conditions. The goal is to achieve a desired reliability at reduced redundancy while also keeping the correction time in check. Several methods are proposed, including one that makes use of leftover spare columns/rows in memory arrays [Datta 09] and another that uses memory characterization tests to customize the ECC on a chip-by-chip basis [Datta 10]. The former demonstrates how reusing spare columns left over from the memory repair process can increase code reliability while keeping hardware overhead to a minimum. The latter shows that customizing ECCs chip by chip considerably reduces check-bit overhead while still providing the desired level of protection for low-voltage operation. The customization is guided by a defect map generated at manufacturing time, which identifies cells that are potentially vulnerable at low voltage.
An ECC-based solution for the wear-out problem of phase change memories (PCM) is also presented. To handle gradual wear-out, and hence increasing defect rates, in PCM systems, an adaptive error correction scheme is proposed [Datta 11a]; implemented alongside the operating system, the adaptive scheme seeks to extend PCM lifetime manyfold. Finally, the work on memory ECC is extended with a fast burst-error-correcting code with minimal overhead for scenarios where multi-bit failures are common [Datta 11b]. The twofold goal of this work – a low-cost code capable of handling multi-bit errors affecting adjacent cells, and fast multi-bit error correction – is achieved by modifying conventional Orthogonal Latin Square codes into burst error codes.
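To make the array-code idea concrete, here is a minimal sketch of a 4x4 row/column parity code, the simplest instance of the array structure that Orthogonal Latin Square codes generalize: a single flipped bit is located at the intersection of the failing row and column checks. This is an illustrative toy, not the dissertation's actual construction.

```python
import numpy as np

def encode(data16):
    d = np.asarray(data16, dtype=int).reshape(4, 4)
    row_par = d.sum(axis=1) % 2          # one parity bit per row
    col_par = d.sum(axis=0) % 2          # one parity bit per column
    return d, row_par, col_par

def correct(d, row_par, col_par):
    d = d.copy()
    bad_rows = np.flatnonzero(d.sum(axis=1) % 2 != row_par)
    bad_cols = np.flatnonzero(d.sum(axis=0) % 2 != col_par)
    if len(bad_rows) == 1 and len(bad_cols) == 1:   # single-bit error
        d[bad_rows[0], bad_cols[0]] ^= 1            # flip it back
    return d

data = np.random.randint(0, 2, 16)       # 16 data bits, 8 check bits
d, rp, cp = encode(data)
rx = d.copy(); rx[2, 1] ^= 1              # inject one bit flip
assert np.array_equal(correct(rx, rp, cp), d)
```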
2. Quantum convolutional stabilizer codes
Chinthamani, Neelima, 30 September 2004
Quantum error correction codes were introduced as a means to protect quantum information from decoherence and operational errors. Based on their approach to error control, error correcting codes can be divided into two classes: block codes and convolutional codes. There has been significant progress in finding quantum block codes since they were first discovered in 1995; in contrast, quantum convolutional codes have remained largely uninvestigated. In this thesis, we develop the stabilizer formalism for quantum convolutional codes. We define distance properties of these codes and give a general method for constructing an encoding circuit from a set of generators of the stabilizer of a quantum convolutional stabilizer code. The resulting encoding circuit enables online encoding of the qubits, i.e., the encoder does not have to wait for the input transmission to end before starting the encoding process. We also develop the quantum analogue of the Viterbi algorithm. The quantum Viterbi algorithm (QVA) is a maximum likelihood error estimation algorithm whose complexity grows linearly with the number of encoded qubits. A variation, the windowed QVA, is also discussed; using it, we can estimate the most likely error without waiting for the entire received sequence.
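For background, the classical algorithm that the QVA generalizes can be sketched compactly. The following hard-decision Viterbi decoder, for the standard rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5), is an illustrative assumption, not code from the thesis, which works with quantum error syndromes rather than bits.

```python
G = [0b111, 0b101]                       # generator polynomials (7, 5)_8
BIG = 10**9                              # stands in for "unreachable"

def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # 3-bit shift register
        out += [bin(state & g).count('1') % 2 for g in G]
    return out

def viterbi(received):
    metric = {0: 0, 1: BIG, 2: BIG, 3: BIG}          # start in state 0
    paths = {s: [] for s in metric}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric, new_paths = {}, {}
        for s in range(4):                           # candidate next state
            best = (float('inf'), None, None)        # (metric, prev, bit)
            for b in (0, 1):
                for prev in range(4):
                    full = ((prev << 1) | b) & 0b111
                    if full & 0b11 != s:
                        continue
                    expect = [bin(full & g).count('1') % 2 for g in G]
                    m = metric[prev] + sum(x != y for x, y in zip(expect, r))
                    if m < best[0]:
                        best = (m, prev, b)
            new_metric[s] = best[0]
            new_paths[s] = paths[best[1]] + [best[2]]
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]        # best-metric survivor

msg = [1, 0, 1, 1, 0, 0]
rx = conv_encode(msg)
rx[3] ^= 1                                           # one channel bit flip
assert viterbi(rx) == msg
```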
3. High-Performance Decoder Architectures for Low-Density Parity-Check Codes
Zhang, Kai, 09 January 2012
Low-Density Parity-Check (LDPC) codes, invented by Gallager in the 1960s, have attracted considerable attention recently. Compared with other error correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism, and high-throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMAX), 802.15.3c (WPAN), DVB-S2, and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput, and rate flexibility. In this work we investigate tradeoffs among these four aspects and develop several decoder architectures that improve one or more of them while maintaining acceptable values for the others.

First, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed: a parallel layered decoding architecture (PLDA) and critical-path splitting. PLDA enables parallel processing for all layers by establishing dedicated message-passing paths among them, so the decoder avoids a large crossbar-based interconnect network. Critical-path splitting carefully adjusts the starting point of each layer to maximize the time interval between adjacent layers, so that the critical-path delay can be split into pipeline stages. Min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2, 2304-bit irregular LDPC decoder is implemented as an ASIC in a 90 nm CMOS process. The decoder achieves an input throughput of 1.1 Gbps, a three- to four-fold improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2.

Second, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes that supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that the selected puncturing scheme introduces a BER degradation of less than 0.2 dB compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMAX) standard. PLDA is then employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMAX LDPC code is implemented in a 90 nm CMOS process; it achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1.

Third, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture efficiently handles the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core.
Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). The complexity of BP-based detection depends only on the number of nonzero interferers, making it well suited for sparse ISI channels, which are characterized by long delay spans but a small fraction of nonzero taps. The layered decoding schedule popular in LDPC decoding is also adopted; simulation results show that it doubles the convergence speed of the iterative belief propagation process. Exploiting the special structure of the connections between check nodes and variable nodes on the factor graph, we propose an efficient detector architecture for generic sparse ISI channels to facilitate practical application of the proposed detection algorithm. The architecture is also reconfigurable, so that connections on the factor graph can be switched flexibly in time-varying ISI channels.
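The min-sum check-node update at the heart of such layered decoders is simple to state. The sketch below is a generic illustration of that update, not the dissertation's VLSI implementation: each outgoing message takes the product of the signs and the minimum of the magnitudes of all the other incoming LLRs, which reduces to tracking only the two smallest magnitudes.

```python
import numpy as np

def check_node_update(llrs):
    """Min-sum messages from one check node to each neighboring variable.

    Output i = (product of signs of all other inputs)
             * (minimum |LLR| over all other inputs).
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
    # Each output excludes its own input: the overall minimum everywhere,
    # except at the argmin position, which gets the second minimum.
    out = np.where(np.arange(len(llrs)) == order[0], min2, min1)
    return total_sign * signs * out               # divides out own sign

print(check_node_update([2.0, -0.5, 1.2, -3.1]))  # -> [ 0.5 -1.2  0.5 -0.5]
```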
4. Low Density Parity Check Codes for Telemetry Applications
Hayes, Bob, October 2007
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Next-generation satellite communication systems require efficient coding schemes that enable high data rates, require low overhead, and have excellent bit error rate performance. A newly rediscovered class of block codes called Low Density Parity Check (LDPC) codes has the potential to revolutionize forward error correction (FEC) because of their very high coding rates. This paper presents a brief overview of LDPC coding and decoding. An LDPC algorithm developed by Goddard Space Flight Center is discussed, and an overview of an accompanying VHDL development by L-3 Communications Cincinnati Electronics is presented.
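As a minimal illustration of LDPC decoding in general (not the Goddard or L-3 design discussed in the paper), Gallager's hard-decision bit-flipping decoder fits in a few lines; the small parity-check matrix here is a toy stand-in for a real sparse code.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],       # each row is one parity check
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(word, max_iters=10):
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            return word                  # all checks satisfied
        # Count, per bit, how many failed checks it participates in,
        # then flip the most-suspect bit.
        fails = H.T @ syndrome
        word[np.argmax(fails)] ^= 1
    return word

codeword = np.zeros(6, dtype=int)        # all-zero word is in any linear code
rx = codeword.copy(); rx[4] ^= 1         # single channel bit flip
assert np.array_equal(bit_flip_decode(rx), codeword)
```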
5. Detection and Decoding for Magnetic Storage Systems
Radhakrishnan, Rathnakumar, January 2009
The hard-disk storage industry is at a critical juncture: current technologies are incapable of achieving densities beyond 500 Gb/in², a limit that will be reached in a few years. Many radically new storage architectures have been proposed which, along with advanced signal processing algorithms, are expected to achieve much higher densities. In this dissertation, various signal processing algorithms are developed to improve the performance of current and next-generation magnetic storage systems.

Low-density parity-check (LDPC) error correction codes are known to provide excellent performance in magnetic storage systems and are likely to replace or supplement currently used algebraic codes. Two methods are described to improve their performance in such systems. In the first, the detector is modified to incorporate auxiliary LDPC parity checks; using graph-theoretical algorithms, a method is provided to incorporate the maximum number of such checks for a given complexity. In the second, a joint detection and decoding algorithm is developed that, unlike all other schemes, operates on the non-binary channel output symbols rather than the input bits. Though sub-optimal, it is shown to provide the best known decoding performance for channels with memory greater than 1, which are practically the most important.

This dissertation also proposes a ternary magnetic recording system from a signal processing perspective. The advantage of this novel scheme is that it can make magnetic transitions with two different but predetermined gradients. By developing optimal signal processing components (receivers, equalizers, and detectors) for this channel, its equivalence to a two-track/two-head system is established and its performance is analyzed. Consequently, it is shown to be preferable to store information using this system rather than a binary system with inter-track interference.

Finally, this dissertation provides a number of insights into the unique characteristics of heat-assisted magnetic recording (HAMR) and two-dimensional magnetic recording (TDMR) channels. For HAMR channels, the effects of the laser spot on transition characteristics and nonlinear transition shift are investigated. For TDMR channels, a suitable channel model is developed to investigate the two-dimensional nature of the noise.
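A channel with memory, of the kind these detectors target, is easy to sketch: each received sample mixes the current symbol with delayed copies of earlier ones. The tap positions and values below are illustrative assumptions, not parameters from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# ISI channel with memory 10 but only a few nonzero taps (made up):
h = np.zeros(11)
h[[0, 4, 10]] = [1.0, 0.6, -0.3]

bits = rng.integers(0, 2, 100)
x = 2.0 * bits - 1.0                     # BPSK mapping {0,1} -> {-1,+1}
y = np.convolve(x, h)[:len(x)] + 0.1 * rng.standard_normal(len(x))

# A trellis/graph-based detector must account for the channel memory;
# complexity that scales with the 3 nonzero taps rather than the full
# memory of 10 is what keeps long-but-sparse channels tractable.
print(y[:5])
```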
6. Design of a retransmission strategy for error control in data communication networks
January 1976
By Seyyed J. Golestaani. / Bibliography: p. 107-108. / Prepared under Grant NSF-ENG75-14103. Originally presented as the author's thesis (M.S.), M.I.T. Dept. of Electrical Engineering and Computer Science, 1976.
7. A proposal of a public-key cryptographic system using classical and quantum convolutional codes (Uma proposta de um sistema criptográfico de chave pública utilizando códigos convolucionais clássicos e quânticos)
Santos, Polyane Alves, 12 August 2018
Advisor: Reginaldo Palazzo Junior / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Previous issue date: 2008 / The public-key cryptographic system proposed in this work, which uses classical and quantum unit-memory convolutional codes, is based on trapdoor transformations that, when applied to the generator submatrices, reduce the error-correcting capability of the code. This increases the privacy of the transmitted information for two reasons: determining optimal unit-memory codes requires solving the knapsack problem, and scrambling the columns of the generator submatrices reduces the error-correcting capability of the codes. New concatenated quantum convolutional codes [(4, 1, 3)] are also presented. / Master's degree in Electrical Engineering, Telecommunications and Telematics
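The trapdoor idea, disguising a code's generator matrix so that its error-correcting structure is hidden from an attacker, follows the McEliece pattern: publish a scrambled, column-permuted generator. The sketch below uses a toy [6,3] binary block code as a stand-in for the unit-memory convolutional generators of the proposal; all matrices are illustrative.

```python
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],        # generator of a toy [6,3] code
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

S = np.array([[1, 1, 0],                 # invertible scrambler over GF(2)
              [0, 1, 0],
              [1, 0, 1]])
P = np.eye(6, dtype=int)[[3, 0, 5, 1, 4, 2]]   # column permutation

G_pub = S @ G @ P % 2                    # published key: structure hidden

msg = np.array([1, 0, 1])
ciphertext = msg @ G_pub % 2             # McEliece also adds an error
print(G_pub)                             # vector here, omitted for brevity
print(ciphertext)
```

Only the holder of S and P can undo the disguise and decode with the fast decoder of the underlying structured code; an attacker sees what looks like a random linear code.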
8. Error-correcting codes in poset spaces (Códigos corretores de erros em espaços poset)
Santos Neto, Pedro Esperidião dos, 14 December 2016
Previous issue date: 2016-12-14 / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work deals with error-correcting codes and their duals over poset spaces. We first review what a code is and the role an error-correcting code plays in a communication system, and we develop the main properties of finite fields. These concepts are then combined to construct error-correcting codes over Hamming spaces, which are widely applied today. Poset spaces are introduced as a generalization of Hamming spaces: we construct error-correcting codes over poset spaces and examine some of their consequences, such as the appearance of P-MDS codes. We state and prove the Duality Theorem for poset spaces and, finally, analyze P-chain codes and some of their properties derived from the Duality Theorem.
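The poset metric that underlies these codes replaces Hamming weight by the size of the order ideal generated by a vector's support. A minimal sketch, with a chain poset chosen purely for illustration:

```python
def poset_weight(x, below):
    """w_P(x): size of the order ideal generated by supp(x).

    x: vector indexed by poset elements 1..n (nonzero = in support).
    below[i]: set of elements strictly below i in the poset P.
    """
    support = {i + 1 for i, xi in enumerate(x) if xi != 0}
    ideal = set(support)
    for i in support:
        ideal |= below[i]                 # close downward under the order
    return len(ideal)

# Chain poset 1 < 2 < 3 < 4: below[i] = {1, ..., i-1}.
below = {i: set(range(1, i)) for i in range(1, 5)}
print(poset_weight((0, 0, 1, 0), below))  # ideal of {3} is {1,2,3} -> 3
print(poset_weight((1, 0, 0, 0), below))  # ideal of {1} is {1}     -> 1
# With the antichain (every below[i] empty) this reduces to Hamming weight.
```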
9. On feedback-based rateless codes for data collection in vehicular networks
Hashemi, Morteza, 28 October 2015
The ability to transfer data reliably and with low delay over an unreliable service is intrinsic to a number of emerging technologies, including digital video broadcasting, over-the-air software updates, public/private cloud storage, and, recently, wireless vehicular networks. In particular, modern vehicles incorporate tens of sensors to provide vital sensor information to electronic control units (ECUs). In the current architecture, vehicle sensors are connected to ECUs via physical wires, which increase the cost, weight and maintenance effort of the car, especially as the number of electronic components keeps increasing. To mitigate the issues with physical wires, wireless sensor networks (WSN) have been contemplated for replacing the current wires with wireless links, making modern cars cheaper, lighter, and more efficient. However, the ability to reliably communicate with the ECUs is complicated by the dynamic channel properties that the car experiences as it travels through areas with different radio interference patterns, such as urban versus highway driving, or even different road quality, which may physically perturb the wireless sensors.
This thesis develops a suite of reliable and efficient communication schemes built upon feedback-based rateless codes, with a target application of vehicular networks. In particular, we first investigate the feasibility of multi-hop networking for intra-car WSNs, and illustrate the potential gains of using the Collection Tree Protocol (CTP), the current state of the art in multi-hop data aggregation. Our results demonstrate, for example, that the packet delivery rate of a node using a single-hop topology protocol can fall below 80% in practical scenarios, whereas CTP improves reliability beyond 95% across all nodes while simultaneously reducing radio energy consumption. Next, to migrate from a wired intra-car network to a wireless system, we consider an intermediate step: a hybrid communication structure in which wired and wireless networks coexist. Toward this goal, we design a hybrid link scheduling algorithm that guarantees reliability and robustness under harsh vehicular environments. We further enhance the hybrid link scheduler with rateless codes such that information leakage to an eavesdropper is almost zero for finite block lengths.
In addition to reliability, one key requirement for coded communication schemes is a fast decoding rate. This feature is vital in a wide spectrum of communication systems, including multimedia and streaming applications (possibly inside vehicles) with real-time playback requirements, and delay-sensitive services where the receiver needs to recover some data symbols before the entire frame has been received. To address this issue, we develop feedback-based rateless codes with dynamically adjusted, nonuniform symbol selection distributions. Our simulation results, backed by analysis, show that feedback information paired with a nonuniform distribution significantly improves the decoding rate compared with state-of-the-art algorithms. We further demonstrate that the amount of feedback sent can be tuned to the specific transmission properties of a given feedback channel.
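The rateless (LT/fountain) encoding these schemes build on is compact: each coded symbol XORs a randomly selected subset of source symbols, with the subset size drawn from a degree distribution. The sketch below uses uniform symbol selection and a made-up degree distribution; the thesis's feedback-driven nonuniform selection would replace the uniform choice of indices.

```python
import numpy as np

rng = np.random.default_rng(1)

def lt_symbol(source, degree_dist):
    degrees, probs = zip(*degree_dist.items())
    d = rng.choice(degrees, p=probs)      # draw an output degree
    idx = rng.choice(len(source), size=d, replace=False)  # uniform selection
    sym = 0
    for i in idx:
        sym ^= source[i]                  # XOR the chosen source symbols
    return set(idx), sym                  # neighbors + coded value

source = list(rng.integers(0, 256, 16))   # 16 byte-sized source symbols
dist = {1: 0.1, 2: 0.5, 3: 0.3, 4: 0.1}   # toy degree distribution
neighbors, value = lt_symbol(source, dist)
print(neighbors, value)
```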