1

Adaptable and enhanced error correction codes for efficient error and defect tolerance in memories

Datta, Rudrajit 31 January 2012 (has links)
Ongoing technology improvements and feature-size reduction have led to an increase in manufacturing-induced parameter variations. These variations affect memory cell circuits, making them unreliable at low voltages. Memories are very dense structures that are especially susceptible to defects, and more so at lower voltages. Transient errors due to radiation, power supply noise, etc., can also cause bit flips in a memory. To protect the data integrity of the memory, an error correcting code (ECC) is generally employed. Existing ECCs, however, are either single-error-correcting or correct multiple errors at the cost of high redundancy or longer correction time. This research addresses the problem of memory reliability under adverse conditions. The goal is to achieve a desired reliability at reduced redundancy while also keeping the correction time in check. Several methods are proposed here, including one that makes use of leftover spare columns/rows in memory arrays [Datta 09] and another that uses memory characterization tests to customize ECC on a chip-by-chip basis [Datta 10]. The former demonstrates how reusing spare columns left over from the memory repair process can increase code reliability while keeping hardware overhead to a minimum. In the latter case, customizing ECCs on a chip-by-chip basis yields a considerable reduction in check-bit overhead while providing the desired level of protection for low-voltage operation. The customization is guided by a defect map generated at manufacturing time, which helps identify cells that are potentially vulnerable at low voltage. An ECC-based solution for tackling the wear-out problem of phase change memories (PCM) is also presented here. To handle gradual wear-out, and hence increasing defect rates, in PCM systems, an adaptive error correction scheme is proposed [Datta 11a]; implemented alongside the operating system, it seeks to increase PCM lifetime manyfold. Finally, the work on memory ECC is extended with a fast burst-error-correcting code with minimal overhead for scenarios where multi-bit failures are common [Datta 11b]. The twofold goal of this work, a low-cost code capable of handling multi-bit errors affecting adjacent cells together with fast multi-bit error correction, is achieved by modifying conventional Orthogonal Latin Square codes into burst-error codes.
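The abstract contrasts single-error-correcting codes with multi-bit schemes that pay for their correction power in redundancy or latency. As a point of reference only, the sketch below shows a textbook Hamming(7,4) single-error-correcting code in Python; it is a generic illustration of check bits and syndrome decoding, not the customized, spare-column-assisted or Orthogonal-Latin-Square-based codes proposed in the dissertation.

```python
# Textbook Hamming(7,4) single-error-correcting code: 4 data bits protected by
# 3 check bits. Generic illustration only, not the dissertation's schemes.

import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],      # generator matrix [I | P]
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],      # parity-check matrix [P^T | I]
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Append 3 check bits to 4 data bits (all arithmetic mod 2)."""
    return np.mod(np.array(data4) @ G, 2)

def correct(word7):
    """Correct at most one flipped bit: the syndrome equals the column of H
    at the error position, or is all-zero if no error is detected."""
    syndrome = np.mod(H @ word7, 2)
    if syndrome.any():
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                word7[pos] ^= 1
                break
    return word7

codeword = encode([1, 0, 1, 1])
codeword[2] ^= 1                                    # inject a single-bit error
assert list(correct(codeword)[:4]) == [1, 0, 1, 1]  # data bits are recovered
```

The three check bits per four data bits illustrate the redundancy baseline that the spare-column reuse and chip-by-chip customization described above aim to reduce.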
2

Quantum convolutional stabilizer codes

Chinthamani, Neelima 30 September 2004 (has links)
Quantum error correction codes were introduced as a means to protect quantum information from decoherence and operational errors. Based on their approach to error control, error correcting codes can be divided into two classes: block codes and convolutional codes. There has been significant development toward finding quantum block codes since they were first discovered in 1995; in contrast, quantum convolutional codes have remained largely uninvestigated. In this thesis, we develop the stabilizer formalism for quantum convolutional codes. We define distance properties of these codes and give a general method for constructing encoding circuits from a set of generators of the stabilizer of a quantum convolutional stabilizer code. The resulting encoding circuit enables online encoding of the qubits, i.e., the encoder does not have to wait for the input transmission to end before starting the encoding process. We develop the quantum analogue of the Viterbi algorithm. The quantum Viterbi algorithm (QVA) is a maximum likelihood error estimation algorithm whose complexity grows linearly with the number of encoded qubits. A variation of the quantum Viterbi algorithm, the Windowed QVA, is also discussed. Using the Windowed QVA, we can estimate the most likely error without waiting for the entire received sequence.
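Since the quantum Viterbi algorithm is developed as the analogue of its classical counterpart, a compact classical sketch may help fix ideas. The code below decodes the common rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5), chosen here purely as an example; it shows the trellis search whose complexity grows linearly with the number of encoded symbols, the property the QVA inherits for qubits.

```python
# Classical hard-decision Viterbi decoding for the rate-1/2, constraint-length-3
# convolutional code with octal generators (7, 5). This is only the classical
# analogue; the thesis develops the quantum version (QVA) for stabilizer codes.

def encode(bits):
    """Rate-1/2 encoder; the state holds the two previous input bits."""
    state, out = 0, []
    for b in bits:
        s1, s0 = (state >> 1) & 1, state & 1
        out += [b ^ s1 ^ s0, b ^ s0]            # generators 111 and 101
        state = ((b << 1) | s1) & 0b11
    return out

def viterbi(received):
    """Most likely input sequence under a Hamming-distance metric.
    Assumes the encoder was flushed back to state 0 with two tail bits."""
    INF = float('inf')
    metrics = {0: 0, 1: INF, 2: INF, 3: INF}    # best path metric per state
    paths = {s: [] for s in metrics}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = {s: INF for s in metrics}
        new_paths = {}
        for state in metrics:
            if metrics[state] == INF:
                continue
            for b in (0, 1):                    # hypothesise the next input bit
                s1, s0 = (state >> 1) & 1, state & 1
                expected = [b ^ s1 ^ s0, b ^ s0]
                nxt = ((b << 1) | s1) & 0b11
                metric = metrics[state] + sum(x != y for x, y in zip(expected, r))
                if metric < new_metrics[nxt]:   # keep only the survivor path
                    new_metrics[nxt] = metric
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[0]                             # terminated path ending in state 0

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg + [0, 0])                       # two tail bits terminate the trellis
rx[3] ^= 1                                      # flip one channel bit
assert viterbi(rx)[:len(msg)] == msg            # the single error is corrected
```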
3

The use of rubrics and correction codes in the marking of Grade 10 Sesotho home language creative writing essays

Sibeko, Johannes January 2015 (has links)
This study investigates the assessment of creative essays in Grade 10 Sesotho home language. Nine participants from a total of six schools took part in the research. For the purposes of this study, no literature was found on the assessment of Sesotho essays (or essay writing in any other African language), either in general or specific to creative writing in South African high schools. The literature on English first-language and English second-language teaching was therefore used to theoretically contextualise the writing and assessment of creative writing essays in Sesotho home language in South African high schools. Data were collected through questionnaires completed by teachers, an analysis of a sample of marked scripts (representing above-average, average and below-average grades) and interviews with teachers (tailored to investigate the aspects of creativity and style in Sesotho creative writing essays). The researcher manually coded open-ended responses in the questionnaires. Interview responses were coded with Atlas.ti version 7. Frequencies were calculated for the close-ended questions in the questionnaire. Participating teachers perceived their assessment of essays using the rubric and the correction code to be standardised, and this was evident in their awarding of marks: teachers in this study generally awarded marks around 60%. However, their claim in the questionnaire that they use comments was contradicted by the absence of comments in the scripts analysed in this study. No relationship was observed between the correction-code frequencies in the marked essays and the marks awarded for specific sections of the rubric. This study recommends using the rubric on earlier drafts in the writing process. In addition, it proposes an expansion of the marking grid so that clearer feedback can be given to learners via the revised rubric. Given the participating teachers' evident lack of clarity on what style in Sesotho home language essays entails, it was inferred that teachers are not clear on the distinctions between the different essay assessment criteria in the rubric. A further recommendation is the development of a rubric guide that clearly indicates to teachers what each criterion of the rubric assesses.
4

High-Performance Decoder Architectures For Low-Density Parity-Check Codes

Zhang, Kai 09 January 2012 (has links)
Low-Density Parity-Check (LDPC) codes, invented by Gallager in the 1960s, have recently attracted considerable attention. Compared with other error correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems because of their near-Shannon-limit error-correcting capacity, high intrinsic parallelism and high-throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2 and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput and rate flexibility. In this work we investigate tradeoffs among these four aspects and develop several decoder architectures that improve one or more of them while maintaining acceptable values for the others. First, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed: a parallel layered decoding architecture (PLDA) and critical-path splitting. The PLDA enables parallel processing of all layers by establishing dedicated message-passing paths among them, so the decoder avoids a large crossbar-based interconnect network. Critical-path splitting carefully adjusts the starting point of each layer to maximize the time intervals between adjacent layers, so that the critical-path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2, 2304-bit irregular LDPC decoder is implemented as an ASIC in a 90 nm CMOS process. The decoder achieves an input throughput of 1.1 Gbps, a 3-4x improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2. Second, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes that supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, this problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that the selected puncturing scheme introduces a BER performance degradation of less than 0.2 dB compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMax) standard. The PLDA is then employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a 90 nm CMOS process; it achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1. Third, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture is developed to efficiently handle the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core.
Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). The complexity of BP-based detection depends only on the number of nonzero interferers, making it well suited for sparse ISI channels, which are characterized by long delay but a small fraction of nonzero interferers. The layered decoding algorithm, popular in LDPC decoding, is also adopted here; simulation results show that layered decoding doubles the convergence speed of the iterative belief propagation process. Exploiting the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an efficient detector architecture for generic sparse ISI channels to facilitate practical application of the proposed detection algorithm. The architecture is also reconfigurable, so it can switch the factor-graph connections flexibly in time-varying ISI channels.
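Several of the architectures above are built around the min-sum approximation used in layered LDPC decoding. The sketch below shows a single generic check-node update in Python; it is illustrative only, and the PLDA, split-memory and reconfigurable check-node units described in the abstract schedule and store these messages in hardware-specific ways.

```python
# Generic min-sum check-node update used in LDPC belief-propagation decoding.
# Illustrative only; not the hardware scheduling of the architectures above.

def check_node_update(incoming):
    """incoming: list of variable-to-check LLR messages for one check node.
    Returns the check-to-variable messages under the min-sum approximation."""
    signs = [1 if m >= 0 else -1 for m in incoming]
    mags = [abs(m) for m in incoming]
    total_sign = 1
    for s in signs:
        total_sign *= s
    # Only the two smallest magnitudes are ever needed, which is why min-sum
    # maps so cheaply onto a check-node processing unit.
    order = sorted(range(len(mags)), key=lambda i: mags[i])
    min1, min2 = mags[order[0]], mags[order[1]]
    outgoing = []
    for i in range(len(incoming)):
        mag = min2 if i == order[0] else min1          # exclude own contribution
        outgoing.append(total_sign * signs[i] * mag)   # sign product excluding self
    return outgoing

# Example: the weakest incoming message limits every outgoing magnitude.
print(check_node_update([2.5, -0.8, 1.4]))   # -> [-0.8, 1.4, -0.8]
```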
5

LOW DENSITY PARITY CHECK CODES FOR TELEMETRY APPLICATIONS

Hayes, Bob 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Next generation satellite communication systems require efficient coding schemes that enable high data rates, require low overhead, and have excellent bit error rate performance. A newly rediscovered class of block codes called Low Density Parity Check (LDPC) codes has the potential to revolutionize forward error correction (FEC) because of the very high coding rates. This paper presents a brief overview of LDPC coding and decoding. An LDPC algorithm developed by Goddard Space Flight Center is discussed, and an overview of an accompanying VHDL development by L-3 Communications Cincinnati Electronics is presented.
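As context for the overview the paper gives, the defining property of any LDPC block code is that a valid codeword satisfies every row of a sparse parity-check matrix. The toy membership test below uses a small matrix invented purely for illustration; it is not the Goddard Space Flight Center code or the L-3 VHDL implementation discussed in the paper.

```python
# Toy illustration of the block-code membership test H @ c = 0 (mod 2). The
# small matrix below is invented for the example and is unrelated to the
# Goddard Space Flight Center LDPC code referenced in the paper.

import numpy as np

H = np.array([[1, 1, 0, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 1, 0, 0, 1, 0],
              [1, 0, 0, 1, 0, 0, 0, 1]])

def is_codeword(word):
    """A word is a codeword iff every parity check (row of H) sums to 0 mod 2."""
    return not np.mod(H @ np.array(word), 2).any()

c = [1, 1, 0, 1, 0, 1, 1, 0]
print(is_codeword(c))             # True: all four checks are satisfied
print(is_codeword(c[:-1] + [1]))  # False: flipping the last bit breaks a check
```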
6

Detection and Decoding for Magnetic Storage Systems

Radhakrishnan, Rathnakumar January 2009 (has links)
The hard-disk storage industry is at a critical point, as current technologies are incapable of achieving densities beyond 500 Gb/in^2, a limit that will be reached in a few years. Many radically new storage architectures have been proposed, which, along with advanced signal processing algorithms, are expected to achieve much higher densities. In this dissertation, various signal processing algorithms are developed to improve the performance of current and next-generation magnetic storage systems. Low-density parity-check (LDPC) error correction codes are known to provide excellent performance in magnetic storage systems and are likely to replace or supplement currently used algebraic codes. Two methods are described to improve their performance in such systems. In the first method, the detector is modified to incorporate auxiliary LDPC parity checks. Using graph-theoretical algorithms, a method is provided to incorporate the maximum number of such checks for a given complexity. In the second method, a joint detection and decoding algorithm is developed that, unlike all other schemes, operates on the non-binary channel output symbols rather than the input bits. Though sub-optimal, it is shown to provide the best known decoding performance for channels with memory greater than 1, which are practically the most important. This dissertation also proposes a ternary magnetic recording system from a signal processing perspective. The advantage of this novel scheme is that it is capable of making magnetic transitions with two different but predetermined gradients. By developing optimal signal processing components such as receivers, equalizers and detectors for this channel, the equivalence of this system to a two-track/two-head system is established and its performance is analyzed. Consequently, it is shown that it is preferable to store information using this system rather than a binary system with inter-track interference. Finally, this dissertation provides a number of insights into the unique characteristics of heat-assisted magnetic recording (HAMR) and two-dimensional magnetic recording (TDMR) channels. For HAMR channels, the effects of the laser spot on transition characteristics and non-linear transition shift are investigated. For TDMR channels, a suitable channel model is developed to investigate the two-dimensional nature of the noise.
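All of the detection problems above are posed on channels with memory, where each read-back sample mixes contributions from neighbouring bits. The sketch below simulates a generic ISI channel with assumed tap weights and additive Gaussian noise; it is a simplified stand-in, not the HAMR or TDMR channel models developed in the dissertation.

```python
# Generic ISI channel sketch: each output sample is a weighted sum of the current
# and previous input symbols plus noise. The tap values and noise level are
# assumptions for illustration, not the HAMR/TDMR models of the dissertation.

import random

def isi_channel(bits, taps=(1.0, 0.5, 0.25), noise_std=0.05):
    """Channel with memory len(taps) - 1: sample k mixes bits k, k-1, k-2."""
    symbols = [2 * b - 1 for b in bits]                 # map {0,1} -> {-1,+1}
    out = []
    for k in range(len(symbols)):
        y = sum(taps[j] * symbols[k - j] for j in range(len(taps)) if k - j >= 0)
        out.append(y + random.gauss(0.0, noise_std))
    return out

random.seed(1)
rx = isi_channel([1, 0, 1, 1, 0])
print([round(y, 2) for y in rx])
# A detector for such a channel must resolve the overlap between neighbouring
# bits (e.g. with a trellis or graph-based search) rather than slicing each
# sample in isolation.
```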
7

Design of a retransmission strategy for error control in data communication networks

January 1976 (has links)
by Seyyed J. Golestaani. / Bibliography: p. 107-108. / Prepared under Grant NSF-ENG75-14103. Originally presented as the author's thesis (M.S.) in the M.I.T. Dept. of Electrical Engineering and Computer Science, 1976.
8

Uma proposta de um sistema criptografico de chave publica utilizando codigos convolucionais classicos e quanticos / A proposal of a cryptographic system of public key using classical and quantum convolutional codes

Santos, Polyane Alves 12 August 2018 (has links)
Advisor: Reginaldo Palazzo Junior / Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2008 / Abstract: The proposal of a public-key cryptographic system using classical and quantum unit-memory convolutional codes presented in this work is based on trapdoor transformations that, when applied to the generator submatrices, reduce the error-correcting capability of the code. This process increases the degree of privacy of the information being sent because of two factors: determining optimal unit-memory codes requires solving the knapsack problem, and the error-correcting capability of the codes is reduced by scrambling the columns of the generator submatrices. New concatenated quantum convolutional codes [(4, 1, 3)] are also presented in this work. / Master's / Telecommunications and Telematics / Master's in Electrical Engineering
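The trapdoor in this proposal rests on scrambling the columns of generator submatrices. As a rough analogy only, the toy sketch below applies a secret column permutation to a small block-code generator matrix in the spirit of code-based (McEliece-type) cryptosystems; the matrix, the bare permutation and the block-code setting are all simplifications and do not represent the unit-memory convolutional construction or the knapsack-based hardness argument of the dissertation.

```python
# Toy column-scrambling step in the spirit of code-based public-key schemes.
# The Hamming(7,4) matrix and the bare permutation are illustrative assumptions,
# not the unit-memory convolutional construction proposed in the dissertation.

import numpy as np

rng = np.random.default_rng(0)

# Private, structured generator matrix that the key owner can decode efficiently.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

perm = rng.permutation(G.shape[1])   # secret column permutation (the "trapdoor")
G_public = G[:, perm]                # published matrix with the structure hidden

def encrypt(message_bits, error_weight=1):
    """Encode with the scrambled matrix and inject a few intentional errors."""
    word = np.mod(np.array(message_bits) @ G_public, 2)
    err_pos = rng.choice(len(word), size=error_weight, replace=False)
    word[err_pos] ^= 1
    return word

ciphertext = encrypt([1, 0, 1, 1])
# Only the key owner knows perm; undoing it maps the word back into the
# structured private code, where the injected errors can then be corrected.
received_private = ciphertext[np.argsort(perm)]
print(ciphertext, received_private)
```

Full code-based schemes combine the permutation with a random invertible scrambling matrix; a permutation alone is used here only to make the hiding step visible in a few lines.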
