331 |
Optimisation for non-linear channel equalisation. Sweeney, Fergal Jon. January 1999 (has links)
No description available.
|
332 |
A generalised type-II hybrid ARQ scheme with soft-decision decoding / Oduol, Vitalice K. (Vitalice Kalecha). January 1987 (has links)
No description available.
|
333 |
Practical Advances in Quantum Error Correction & Communication. Criger, Daniel Benjamin. January 2013 (has links)
Quantum computing exists at the intersection of mathematics, physics, chemistry, and engineering; its main goal is the creation of devices and algorithms that use the properties of quantum mechanics to store, manipulate, and measure information. Many families of quantum algorithms, using non-classical logical operations, can outperform traditional classical algorithms in terms of memory and processing requirements. In addition, quantum computing devices can be made fundamentally smaller than classical processors and memory elements, because the physical models governing their performance apply at all scales, whereas the principles underlying classical logic elements rely on the macroscopic nature of the device in question.
Quantum algorithms, for the most part, are predicated on a theory of resources. It is often assumed that quantum computers can be placed in a precise fiducial state prior to computation, and that logical operations are perfect, inducing no error on the systems on which they act. These assumptions greatly simplify algorithmic design, but they are fundamentally unrealistic. To justify their use, it is necessary to develop a framework in which a large number of imperfect devices simulates the action of a perfect device, with some acceptable probability of failure; this is the study of fault-tolerant quantum computing. To pursue this study effectively, it is necessary to understand the fundamental nature of generic quantum states and operations, as well as the means by which one can correct quantum errors. Additionally, it is important to minimize the computational resources used in achieving error reduction and fault-tolerant computing.
This thesis is concerned with three projects related to the use of error-prone quantum systems to transmit and manipulate information. The first concerns the use of imperfectly-prepared states in error-correction routines: using optimal quantum error correction, we are able to deduce a method of partially protecting encoded quantum information against preparation errors prior to encoding, using no additional qubits. The second details the search for entangled states which can be used to transmit classical information over quantum channels at a rate superior to that achievable with classical states. The third concerns the transcoding of data from one quantum code into another using few ancillary resources. The descriptions of these projects are preceded by a brief introduction to representations of quantum states and channels, for completeness.
Three techniques of general interest are presented in appendices. The first is an introduction to, and a minor advance in, the development of optimal error correction codes. The second is a more efficient means of calculating the action of a quantum channel on a given state, given that the channel acts non-trivially only on a subsystem rather than on the entire system. Finally, we include documentation on a software package developed to aid the search for quantum transcoding operations.
|
334 |
Optimal Points for a Probability Distribution on a Nonhomogeneous Cantor Set. Roychowdhury, Lakshmi, 1975-. 02 October 2013 (has links)
The objective of my thesis is to find optimal points and the quantization error for a probability measure defined on a Cantor set. The Cantor set considered in this work is generated by two self-similar contraction mappings on the real line with distinct similarity ratios. On this Cantor set we have defined a nonhomogeneous probability measure whose support lies on the set. For this probability measure we have first determined the n-optimal points and the nth quantization error for n = 2 and n = 3. Then, by means of further lemmas and propositions, we have proved a theorem which gives all the n-optimal points and the nth quantization error for all positive integers n. In addition, we have given some properties of the optimal points and the quantization error for the probability measure. Finally, we have given a list of n-optimal points and the corresponding errors for some positive integers n. The result in this thesis is a nonhomogeneous extension of a similar result of Graf and Luschgy from 1997. The techniques in my thesis could be extended to discretise any continuous random variable by another random variable with finite range.
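The optimal points described in this abstract can be approximated numerically. The sketch below is not from the thesis: the similarity ratios and probabilities are illustrative placeholders, the measure is sampled by the chaos game, and Lloyd's algorithm is used to estimate the 2-optimal points and the associated quantization error.

```python
import random

def cantor_samples(n, r1=0.25, r2=0.5, p1=0.4, p2=0.6, burn=20, seed=0):
    """Sample a nonhomogeneous Cantor measure via the chaos game.
    The measure is supported on the attractor of S1(x) = r1*x and
    S2(x) = r2*x + (1 - r2), applied with probabilities p1 and p2
    (all parameters here are illustrative, not the thesis's)."""
    rng = random.Random(seed)
    pts, x = [], rng.random()
    for i in range(n + burn):
        if rng.random() < p1:
            x = r1 * x
        else:
            x = r2 * x + (1 - r2)
        if i >= burn:          # discard transient before the attractor
            pts.append(x)
    return pts

def lloyd(points, n_centers=2, iters=50):
    """Lloyd's algorithm: alternate nearest-centre assignment and
    conditional means; converges to a set of (locally) optimal points."""
    centers = sorted(points[:n_centers])
    for _ in range(iters):
        cells = [[] for _ in centers]
        for x in points:
            j = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
            cells[j].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(cells)]
    # Quantization error: mean squared distance to the nearest center.
    err = sum(min((x - a) ** 2 for a in centers) for x in points) / len(points)
    return sorted(centers), err

centers, err = lloyd(cantor_samples(2000))
print("2-optimal points (approx):", centers, "error:", err)
```

This only finds locally optimal points from Monte Carlo samples; the thesis derives the exact n-optimal points analytically for every n.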
|
335 |
Soft Error Resistant Design of the AES Cipher Using SRAM-based FPGA. Ghaznavi, Solmaz. January 2011 (has links)
This thesis presents a new architecture for the reliable implementation of the symmetric-key algorithm Advanced Encryption Standard (AES) in Field Programmable Gate Arrays (FPGAs). Since FPGAs are prone to soft errors caused by radiation, and AES is highly sensitive to errors, reliable architectures are of significant concern. Energetic particles hitting a device can flip bits in the FPGA SRAM cells controlling all aspects of the implementation. Unlike previous research, heterogeneous error detection techniques based on properties of the circuit and its functionality are used to provide adequate reliability at the lowest possible cost. The use of dual-ported block memory for SubBytes, duplication for the control circuitry, and a new enhanced parity technique for MixColumns is proposed. Previous parity techniques cover single errors in datapath registers; however, soft errors can also occur in the control circuitry and in the SRAM cells forming the combinational logic and routing. In this research, propagation of single errors is investigated in the routed netlist, and weaknesses of the previous parity techniques are identified. Architectural redesign at the register-transfer level is introduced to resolve undetected single errors in both the routing and the combinational logic.
Reliability of the AES implementation is a critical issue not only in large-scale FPGA-based systems but also at higher altitudes and in space applications, where energetic particles are more numerous. Thus, this research is important for providing efficient soft error resistant design in many current and future secure applications.
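The register-protection idea behind the parity techniques mentioned above can be illustrated in a few lines. This is a generic sketch, not the thesis's enhanced MixColumns parity predictor: each byte of a 16-byte AES state carries one even-parity bit, and a single bit flip (a soft error) in any protected register is detected by a parity mismatch.

```python
def parity(byte):
    """Even parity of an 8-bit value (XOR-fold of its bits)."""
    byte ^= byte >> 4
    byte ^= byte >> 2
    byte ^= byte >> 1
    return byte & 1

def protect(state):
    """Attach a parity bit to each state byte, as a register would."""
    return [(b, parity(b)) for b in state]

def check(protected):
    """Return indices of bytes whose stored parity no longer matches,
    i.e. where a single bit flip has been detected."""
    return [i for i, (b, p) in enumerate(protected) if parity(b) != p]

# Simulate a single-event upset: flip one bit in byte 5 of the state.
state = list(range(16))
prot = protect(state)
b, p = prot[5]
prot[5] = (b ^ 0x08, p)     # soft error in a datapath register
assert check(prot) == [5]   # the flipped byte is flagged
```

Note that single-byte parity detects any odd number of bit flips per byte but misses even-weight errors, which is one reason the thesis investigates error propagation through the routed netlist rather than relying on register parity alone.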
|
336 |
A study of the robustness of magic state distillation against Clifford gate faults. Jochym-O'Connor, Tomas Raphael. January 2012 (has links)
Quantum error correction and fault-tolerance are at the heart of any scalable quantum computation architecture. Developing a set of tools that satisfy the requirements of fault-tolerant schemes is thus of prime importance for future quantum information processing implementations. The Clifford gate set has the desired fault-tolerant properties, preventing bad propagation of errors within encoded qubits, for many quantum error correcting codes, yet it does not provide full universal quantum computation. Preparation of magic states can enable universal quantum computation in conjunction with Clifford operations; however, experimentally prepared magic states will be imperfect due to implementation errors. Thankfully, there exists a scheme to distill pure magic states from prepared noisy magic states using only operations from the Clifford group and measurement in the Z-basis; such a scheme is called magic state distillation [1]. This work investigates the robustness of magic state distillation to faults in state preparation and in the application of the Clifford gates in the protocol. We establish that the distillation scheme is robust to perturbations in the initial state preparation and characterize the set of states in the Bloch sphere that converge to the T-type magic state in different fidelity regimes. Additionally, we show that magic state distillation is robust to low levels of gate noise and that performing the distillation scheme using noisy Clifford gates is more efficient than using encoded fault-tolerant gates, due to the large overhead in fault-tolerant quantum computing architectures.
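As a rough numerical companion to this abstract (not part of the thesis), the leading-order behaviour of the standard 15-to-1 distillation protocol of Bravyi and Kitaev can be iterated directly: each round maps an input error rate eps to approximately 35 * eps**3, which shrinks below the threshold 35**-0.5 (about 0.169) and grows above it, at a cost of 15 raw states per output state per round.

```python
def distill_error(eps, rounds):
    """Leading-order output error rate after repeated 15-to-1 magic
    state distillation: eps -> 35 * eps**3 per round. This is the
    standard lowest-order approximation, ignoring gate noise."""
    for _ in range(rounds):
        eps = 35 * eps ** 3
    return eps

def raw_state_cost(rounds):
    """Noisy input magic states consumed per distilled output state."""
    return 15 ** rounds

# Below threshold the error collapses rapidly; the cost grows as 15^r.
for r in range(4):
    print(f"rounds={r}  cost={raw_state_cost(r):5d}  "
          f"error={distill_error(0.05, r):.3e}")
```

The cubic suppression is why the overhead question studied in the thesis matters: each extra round multiplies the raw-state cost by 15, so knowing how much Clifford gate noise the protocol tolerates directly affects the total resource count.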
|
337 |
Burst and compound error correction with cyclic codes. Lewis, David John Head. January 1971 (has links)
No description available.
|
338 |
Decoding of linear block codes based on ordered statistics. Fossorier, Marc P. C. January 1994 (has links)
Thesis (Ph. D.)--University of Hawaii at Manoa, 1994. Includes bibliographical references (leaves 177-182). Microfiche. xvii, 182 leaves, bound, ill., 29 cm.
|
339 |
Serially concatenated trellis coded modulation / Gray, Paul K. Unknown Date (has links)
Thesis (PhD) -- University of South Australia, 1999
|
340 |
A Rate-Distortion Optimized Multiple Description Video Codec for Error Resilient Transmission. Biswas, Moyuresh. Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW. January 2009 (has links)
The demand for applications like the transmission and sharing of video is ever-increasing. Although network resources (bandwidth in particular), network coverage, networking technologies, and the compression ratios of state-of-the-art video coders have all improved, the unreliability of the transmission medium prevents us from gaining the most benefit from these applications. This thesis introduces a video coder that is resilient to network failures in transmission applications, built on the framework of multiple description coding (MDC). Unlike traditional video coding, which compresses the video into a single bitstream, MDC compresses the video into more than one bitstream, each of which can be independently decoded. This not only averages the effect of network errors over the bitstreams but also makes it possible to exploit the multipath nature of most network topologies. An end-to-end rate-distortion optimization is proposed for the codec to ensure that it exhibits improved compression performance and that the descriptions are equally efficient, improving the final video quality. An optimized strategy for packetizing the compressed bitstreams of the descriptions is also proposed, which guarantees that each packet is self-contained and efficient. Evaluation of the developed MD codec over simulated unreliable packet networks shows that improved resilience is achievable with the proposed strategies and that the end video quality is significantly improved as a result. This is further verified with subjective evaluation over a range of different types of video test sequences.
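The core MDC idea described above can be sketched on a 1-D signal. This is a toy illustration, not the thesis's rate-distortion optimized codec: the samples are split into two independently decodable descriptions (even-indexed and odd-indexed), and when one description is lost, a side decoder conceals the missing samples by interpolating between received neighbours.

```python
def encode_mdc(signal):
    """Split samples into two descriptions by a temporal polyphase
    split: even-indexed samples vs odd-indexed samples."""
    return signal[0::2], signal[1::2]

def decode_central(d_even, d_odd):
    """Central decoder: both descriptions arrive, so the original
    sample order is restored exactly."""
    out = []
    for e, o in zip(d_even, d_odd):
        out += [e, o]
    if len(d_even) > len(d_odd):   # odd-length signals
        out.append(d_even[-1])
    return out

def decode_side(desc):
    """Side decoder: one description was lost on the network, so the
    missing samples are interpolated from their received neighbours."""
    out = []
    for i, s in enumerate(desc):
        out.append(s)
        nxt = desc[i + 1] if i + 1 < len(desc) else s
        out.append((s + nxt) / 2)   # simple linear concealment
    return out

sig = [0, 2, 4, 6, 8, 10, 12, 14]
even, odd = encode_mdc(sig)
assert decode_central(even, odd) == sig
print("one description lost:", decode_side(even))
```

Losing one description here degrades quality gracefully instead of catastrophically, which is the property the thesis optimizes end-to-end across real video descriptions and packetization.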
|