81

Correlation-based Cross-layer Communication in Wireless Sensor Networks

Vuran, Mehmet Can 09 July 2007 (has links)
Wireless sensor networks (WSNs) are event-based systems that rely on the collective effort of densely deployed sensor nodes continuously observing a physical phenomenon. The spatio-temporal correlation between sensor observations, together with the advantages of cross-layer design, is significant and unique to WSNs. Because of the high density of the network topology, sensor observations are highly correlated in the space domain. Furthermore, the nature of the energy-radiating physical phenomenon induces temporal correlation between consecutive observations of a sensor node. This unique characteristic of WSNs can be exploited through a cross-layer design of communication functionalities to improve the energy efficiency of the network. In this thesis, several key elements are investigated to capture and exploit this correlation for the realization of efficient communication protocols. A theoretical framework is developed to capture the spatial and temporal correlations in WSNs and to enable the development of efficient communication protocols. Based on this framework, the spatial Correlation-based Collaborative Medium Access Control (CC-MAC) protocol is described, which exploits the spatial correlation in the WSN to achieve efficient medium access. Furthermore, the cross-layer module (XLM), which merges common protocol-layer functionalities into a single module for resource-constrained sensor nodes, is developed. A cross-layer analysis of error control in WSNs is then presented to enable a comprehensive comparison of error control schemes for WSNs. Finally, a cross-layer packet size optimization framework is described.
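For readers who want a concrete handle on the spatial-correlation idea, the following minimal Python sketch (not from the thesis) thins a dense deployment by greedily keeping only nodes whose pairwise correlation stays below a threshold, which is the intuition behind letting fewer correlated nodes contend for the medium in CC-MAC. The exponential correlation model rho(d) = exp(-d/theta) and the values of theta and rho_max are illustrative assumptions.

```python
import math
import random

def correlation(d, theta=50.0):
    """Assumed power-exponential spatial correlation model: rho(d) = exp(-d/theta)."""
    return math.exp(-d / theta)

def select_representatives(nodes, rho_max=0.4, theta=50.0):
    """Greedily keep a node only if its correlation with every node already
    kept is below rho_max, suppressing redundant (highly correlated) reports."""
    chosen = []
    for p in nodes:
        if all(correlation(math.dist(p, q), theta) < rho_max for q in chosen):
            chosen.append(p)
    return chosen

random.seed(1)
field = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]
reps = select_representatives(field)
print(f"{len(reps)} of {len(field)} nodes suffice at rho_max = 0.4")
```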
82

Modern Error Control Codes and Applications to Distributed Source Coding

Sartipi, Mina 15 August 2006 (has links)
This dissertation first studies two-dimensional wavelet codes (TDWCs). TDWCs are introduced as a solution to the problem of designing a 2-D code that has low decoding complexity and the maximum erasure-correcting property for rectangular burst erasures. The half-rate TDWCs of dimensions N₁ × N₂ satisfy the Reiger bound with equality for burst erasures of dimensions N₁ × N₂/2 and N₁/2 × N₂, where GCD(N₁, N₂) = 2. Examples of TDWCs are provided that recover any rectangular burst erasure of area N₁N₂/2. These lattice-cyclic codes can recover burst erasures with simple and efficient ML decoding. This work then studies the problem of distributed source coding for two and three correlated signals using channel codes. We propose to model the distributed source coding problem with a set of parallel channels, which reduces distributed source coding to designing non-uniform channel codes. This design criterion improves the performance of the source coding considerably. LDPC codes are used for lossless and lossy distributed source coding, when the correlation parameter is known or unknown at the time of code design. We show that distributed source coding at the corner point using LDPC codes reduces to non-uniform LDPC codes and semi-random punctured LDPC codes for systems of two and three correlated sources, respectively. We also investigate distributed source coding at an arbitrary rate on the Slepian-Wolf rate region; this problem reduces to designing a rate-compatible LDPC code with the unequal error protection property. The dissertation finally studies the distributed source coding problem for applications whose wireless channel is an erasure channel with unknown erasure probability. For these applications, rateless codes are better candidates than LDPC codes, and non-uniform rateless codes and an improved decoding algorithm are proposed for this purpose. We introduce a reliable, rate-optimal, and energy-efficient multicast algorithm that uses distributed source coding and rateless coding. The proposed multicast algorithm performs very close to network coding, while having lower complexity and higher adaptability.
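As a worked example of the Slepian-Wolf corner point that the non-uniform LDPC construction targets, the following Python sketch (not from the dissertation) computes the rates for two binary sources whose correlation is modeled, as an illustrative assumption, by a binary symmetric relation with crossover probability p:

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Assumed correlation model: X ~ Bernoulli(1/2), Y = X xor E, E ~ Bernoulli(p).
p = 0.05
R_x, R_y = 1.0, h2(p)   # corner point of the Slepian-Wolf region: H(X), H(Y|X)
print(f"corner point: R_X = {R_x:.3f}, R_Y = H(Y|X) = {R_y:.3f} bits")
print(f"joint rate {R_x + R_y:.3f} vs 2.000 bits for independent encoding")
```

At the corner point the second source is compressed all the way down to its conditional entropy, which is why the channel-code view (decoding Y given X as side information) applies.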
83

ECC Video: An Active Second Error Control Approach for Error Resilience in Video Coding

Du, Bing Bing January 2003 (has links)
Supporting video communication over mobile environments has long been an objective of telecommunication network engineers and is a basic requirement of third-generation mobile communication systems. This dissertation explores the possibility of optimizing the utilization of scarce shared radio channels for live video transmission over a GSM (Global System for Mobile Communications) network, and of realizing error-resilient video communication in unfavorable channel conditions, especially mobile radio channels. The main contribution is the adoption of an SEC (Second Error Correction) approach using ECC (Error Correction Coding) based on a punctured convolutional coding scheme to cope with residual errors at the application layer and enhance the error resilience of a compressed video bitstream. The approach is developed further for improved performance in different circumstances, with additional enhancements involving intra-frame relay and interleaving, and through combination with packetization. Simulation results from applying the various techniques to the test video sequences Akiyo and Salesman are presented and analyzed for performance comparison with the conventional video coding standard. The proposed approach shows consistent improvements under these conditions. For instance, against random residual errors, the simulation results show that when the residual BER (Bit Error Rate) reaches 10⁻⁴, the video output reconstructed from a bitstream protected by the standard resynchronization approach is of unacceptable quality, while the proposed scheme delivers a video output that is entirely error-free, and does so more efficiently. When the residual BER reaches 10⁻³, the standard approach fails to deliver a recognizable video output, while the SEC scheme still corrects all residual errors with a modest bit-rate increase. Under bursty residual error conditions, the proposed scheme also outperforms the resynchronization approach. Future work to extend the scope and applicability of the research is suggested in the last chapter of the thesis.
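A punctured convolutional scheme raises the rate of a mother code by deleting coded bits according to a fixed pattern, and the deleted positions are treated as erasures at the decoder. The following minimal Python sketch shows puncturing and depuncturing only; the pattern P and the bitstream are illustrative assumptions, and the thesis's actual SEC codes and patterns are not reproduced here.

```python
# Puncturing pattern P of period 4 over the coded stream: for a rate-1/2
# mother code (4 coded bits per 2 input bits), dropping 1 of every 4 coded
# bits yields a rate-2/3 code.
P = [1, 1, 1, 0]  # keep, keep, keep, drop (illustrative pattern)

def puncture(coded):
    return [b for i, b in enumerate(coded) if P[i % len(P)]]

def depuncture(received):
    """Re-insert erasures (None) where bits were punctured, before decoding."""
    out, i, it = [], 0, iter(received)
    while True:
        if P[i % len(P)]:
            try:
                out.append(next(it))
            except StopIteration:
                return out
        else:
            out.append(None)  # erasure: the decoder treats this bit as unknown
        i += 1

coded = [1, 0, 1, 1, 0, 0, 1, 1]
tx = puncture(coded)
print(tx, "->", depuncture(tx))
```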
84

A Modified Sum-Product Algorithm over Graphs with Short Cycles

Raveendran, Nithin January 2015 (has links) (PDF)
We investigate the limitations of the sum-product algorithm for binary low-density parity-check (LDPC) codes whose graphs contain isolated short cycles. The independence assumption on the messages passed, reasonable in cycle-free configurations, fails most severely in graphical structures with short cycles. This work is a step towards understanding the effect of short cycles on the error floors of the sum-product algorithm. We propose a modified sum-product algorithm that accounts for the statistical dependency of the messages passed around a cycle of length 4. We also formulate the modified algorithm in the log domain, which eliminates the numerical instability and precision issues associated with the probability domain. Simulation results show a signal-to-noise ratio (SNR) improvement for the modified sum-product algorithm over the original algorithm, suggesting that modeling the dependency among messages improves the decisions and successfully mitigates the effects of length-4 cycles in the Tanner graph. The improvement is significant in the high-SNR region, pointing to a possible cause of the error floor on such graphs. Using density evolution techniques, we analyse the modified decoding algorithm. The threshold computed for the modified algorithm is higher than that of the standard sum-product algorithm, corroborating the observed simulation results. We also prove that the conditional entropy of a codeword given the estimate obtained using the modified algorithm is lower than with the original sum-product algorithm.
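For reference, the standard log-domain check-node update that the thesis modifies is sketched below in Python. It is the textbook rule that assumes independent incoming messages, exactly the assumption that breaks on length-4 cycles; the thesis's dependency correction is not included here.

```python
import math

def check_node_update(llrs_in):
    """Standard log-domain check-node rule: the outgoing LLR on each edge is
    2 * atanh( product of tanh(L/2) over the *other* incoming edges )."""
    t = [math.tanh(l / 2.0) for l in llrs_in]
    out = []
    for i in range(len(llrs_in)):
        prod = 1.0
        for j, tj in enumerate(t):
            if j != i:
                prod *= tj
        out.append(2.0 * math.atanh(prod))
    return out

# Three extrinsic messages (LLRs) arriving at a check node:
print([round(l, 3) for l in check_node_update([1.2, -0.8, 2.5])])
```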
85

On Codes for Private Information Retrieval and Ceph Implementation of a High-Rate Regenerating Code

Vinayak, R January 2017 (has links) (PDF)
Error-control codes, extensively used in communication systems, have also proved very useful in data storage over the past decade. This thesis deals with two types of codes for data storage, one pertaining to privacy and the other to reliability. In many scenarios, a user accessing critical data from a server would not want the server to learn the identity of the data retrieved. This problem, called Private Information Retrieval (PIR), was first formally introduced by Chor et al., who gave PIR protocols for the case where multiple copies of the same data are stored on non-communicating servers. The PIR protocols that came later also followed this replication model. The problem with data replication is the high storage overhead involved, which leads to large storage costs. Later, Fazeli, Vardy and Yaakobi introduced the notion of a PIR code, which enables information-theoretic PIR with low storage overhead. In the first part of this thesis, constructions of PIR codes for certain parameter values are presented. These constructions are based on a variant of conventional Reed-Muller (RM) codes called binary Projective Reed-Muller (PRM) codes. A lower bound on the block length of systematic PIR codes is derived, and the PRM-based PIR codes are shown to be optimal with respect to this bound in some special cases. The codes constructed here have smaller block lengths than the short-block-length PIR codes known in the literature. The generalized Hamming weights of binary PRM codes are also studied. The other work described here is the implementation and evaluation of an erasure code called the Coupled Layer (CL) code in the Ceph distributed storage system. Erasure codes are used in distributed storage to ensure reliability; an additional desirable feature in this setting is the ability to handle node repair efficiently. The Minimum Storage Regenerating (MSR) version of the CL code downloads the optimal amount of data from other nodes during repair of a failed node, and even the disk reads during this process are optimal for that storage overhead. The CL-Near-MSR code, a variant of CL-MSR, can also efficiently handle a restricted set of multiple node failures. Four example CL codes were evaluated on a 26-node Amazon cluster, and performance metrics such as network bandwidth, disk reads and repair time were measured. A repair-time reduction by a factor of about 3 was observed for one of the codes, in comparison with a Reed-Solomon code with the same parameters. To the best of our knowledge, such large gains in repair performance had never been demonstrated before.
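For background, the replication-based protocol of Chor et al. that PIR codes aim to make storage-efficient can be sketched in a few lines of Python. This toy two-server version is a standard textbook illustration, not code from the thesis; database size and contents are illustrative.

```python
import random

def xor_bits(db, idxs):
    acc = 0
    for i in idxs:
        acc ^= db[i]
    return acc

def pir_query(db_a, db_b, i):
    """Two-server PIR in the style of Chor et al.: each server alone sees only
    a uniformly random subset of indices, so neither learns which bit is wanted."""
    n = len(db_a)
    S = {j for j in range(n) if random.random() < 0.5}
    T = S ^ {i}                      # symmetric difference toggles index i
    return xor_bits(db_a, S) ^ xor_bits(db_b, T)

random.seed(7)
db = [random.randint(0, 1) for _ in range(32)]   # two replicated copies of db
assert all(pir_query(db, db, i) == db[i] for i in range(32))
print("retrieved every bit correctly, privately")
```

The XOR of the two answers cancels every index except i, which is why correctness holds whether or not i happens to lie in S.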
86

Iterative Receivers for Noisy Multiple Access Channels with N Frequencies and T Users

Sharma, Manish 17 August 2018 (has links)
Advisor: Jaime Portugheis / Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / The aim of this work is to analyze the performance of iterative joint reception and detection for multiple access channels. The analysis centers on a noisy N-frequency (MFSK) channel shared by T users. Channel capacity values are obtained for joint and single-user detection. Although the system's spectral efficiency is relatively low, combining it with a wide frequency band allows high transmission rates at low signal-to-noise ratio. The receiver is modeled as a factor graph and analyzed through EXIT charts, which are also used to optimize the users' error-correcting codes. Several systems based on this technique are proposed and their bit error probabilities simulated. The results indicate that it is possible to transmit information at rates close to channel capacity. Both the receiver graph and the subsequent analyses can be applied to other multiple access channels, in particular systems with N orthogonal transmission symbols. / Doctorate in Electrical Engineering, Telecommunications and Telematics
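A toy Python model of the channel studied here, under the simplifying assumption that the receiver observes only which frequencies carry energy (the union of the users' tones); the thesis treats the noisy version of this channel with iterative factor-graph detection, which this sketch does not attempt.

```python
import random

N, T = 16, 4  # illustrative: 16 orthogonal frequencies shared by 4 users

def channel(symbols, p_flip=0.0):
    """Noiseless-by-default toy model: the output is an N-slot energy map,
    the union of the tones sent; optional symmetric noise flips each slot."""
    received = [0] * N
    for f in symbols:
        received[f] = 1
    if p_flip:
        received = [b ^ (random.random() < p_flip) for b in received]
    return received

random.seed(3)
tx = [random.randrange(N) for _ in range(T)]   # each user picks one tone
print("sent tones:", sorted(tx), "-> active slots:", channel(tx))
```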
87

A Posteriori Error Analysis of Discontinuous Galerkin Methods for Elliptic Variational Inequalities

Porwal, Kamana January 2014 (has links) (PDF)
The main emphasis of this thesis is the a posteriori error analysis of discontinuous Galerkin (DG) methods for elliptic variational inequalities. DG methods have become very popular in the last two decades owing to their ability to handle complex geometries, to allow irregular meshes with hanging nodes and different degrees of polynomial approximation on different elements; moreover, they are high-order accurate and stable. Adaptive algorithms refine the mesh locally in regions where the solution exhibits irregular behaviour, and a posteriori error estimates are the main ingredient used to steer the adaptive mesh refinement. The solution of a linear elliptic problem exhibits singularities due to changes in boundary conditions, irregularity of coefficients, and re-entrant corners of the domain. In addition, the solution of a variational inequality exhibits further irregular behaviour due to the free boundary (the part of the domain which is a priori unknown and must be found as a component of the solution). Lacking full elliptic regularity of the solution, uniform refinement is inefficient and does not yield the optimal convergence rate, whereas adaptive refinement, based on the residuals (or a posteriori error estimators) of the problem, enhances efficiency by refining the mesh locally and recovers optimal convergence. In this thesis, we derive a posteriori error estimates for DG methods applied to elliptic variational inequalities of the first and second kind. The thesis contains seven chapters, including an introductory and a concluding chapter. In the introductory chapter, we review fundamental preliminary results used in the subsequent analysis. In Chapter 2, a posteriori error estimates for a class of DG methods are derived for the second-order elliptic obstacle problem, a prototype for elliptic variational inequalities of the first kind. The analysis of Chapter 2 is carried out for a general obstacle function, so the error estimator obtained there involves min/max functions and its computation becomes somewhat complicated. With a mild assumption on the trace of the obstacle, we derive a significantly simpler and easily computable error estimator in Chapter 3; numerical experiments illustrate that this estimator indeed behaves better than the one derived in Chapter 2. In Chapter 4, we carry out a posteriori analysis of DG methods for the Signorini problem, which arises in the study of frictionless contact problems. A nonlinear smoothing map from the DG finite element space to a conforming finite element space is constructed and used extensively in the analyses of Chapters 2, 3 and 4; moreover, a property common to all DG methods allows us to carry out the analysis in a unified setting. In Chapter 5, we study the C⁰ interior penalty method for the plate frictional contact problem, a fourth-order variational inequality of the second kind; in this chapter we also establish a medius analysis alongside the a posteriori analysis. Numerical results are presented at the end of every chapter to illustrate the theoretical results derived there. We discuss possible extensions and future directions of the work in Chapter 6, and in the last chapter we document the FEM codes used in the numerical experiments.
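For orientation, a standard residual-based a posteriori estimator for the linear model problem −Δu = f is sketched below; this simplified form is not taken from the thesis, whose estimators for variational inequalities carry additional terms (for instance, min/max terms tied to the obstacle and the discrete constraints near the free boundary).

```latex
\eta^2
= \sum_{K \in \mathcal{T}_h} h_K^2 \,\| f + \Delta u_h \|_{L^2(K)}^2
+ \sum_{e \in \mathcal{E}_h^{\mathrm{int}}} h_e \,\big\| [\![ \nabla u_h \cdot n_e ]\!] \big\|_{L^2(e)}^2
+ \sum_{e \in \mathcal{E}_h} h_e^{-1} \,\big\| [\![ u_h ]\!] \big\|_{L^2(e)}^2
```

The three sums measure, respectively, the element residual, the jumps of the normal flux across interior edges, and the jumps of the discontinuous solution itself; the last term is characteristic of DG methods.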
88

Low Overhead Soft Error Mitigation Methodologies

Prasanth, V January 2012 (has links) (PDF)
CMOS technology scaling is bringing new challenges to designers in the form of new failure modes, including long-term reliability failures and particle-strike-induced random failures. Studies have shown that, increasingly, the largest contributor to device reliability failures will be soft errors. Because of these reliability concerns, the adoption of soft error mitigation techniques is on the rise, and as these techniques are adopted, the area and performance overheads incurred in their implementation become pertinent. This thesis addresses the problem of providing low-cost soft error mitigation. Its main contributions are: (i) a new delayed-capture methodology for low-overhead soft error detection; (ii) the adoption of Error Control Coding (ECC) within the delayed-capture methodology to correct single event upsets; (iii) an analysis of the impact of different derating factors in reducing the hardware overhead incurred by these implementations; and (iv) a proposal for hardware-software co-design for reliability, based on identifying critical components from the application executing on the hardware (as opposed to standalone hardware analysis). The thesis first surveys existing soft error mitigation techniques and their limitations, then proposes the new delayed-capture methodology as a low-overhead soft error detection technique. Delayed capture is an enhancement of the Razor flip-flop methodology: the parity of a set of flip-flops is computed at their inputs and at their outputs, and the input parity is latched on a second clock, delayed with respect to the functional clock by more than the soft error pulse width. This requires one extra flip-flop per set of flip-flops, whereas the Razor methodology requires an additional flip-flop for every functional flip-flop. Owing to the skew between the clocks, either the parity flip-flop or a functional flip-flop captures the effect of a transient, so comparing the output parity with the latched input parity detects the error. Fault injection experiments are performed to evaluate the benefits and limitations of the proposed approach. The limitations are soft error detection escapes and the lack of error correction capability. The detection escapes are analyzed and attributed mainly to a Single Event Upset (SEU) putting multiple flip-flops within a group in error. The error space due to SEUs is analyzed, and an intelligent flip-flop grouping method using graph-theoretic formulations is proposed such that no SEU can put multiple flip-flops of the same group in error. Since leaving correction to the application may not be desirable once an error occurs, the delayed-capture methodology is extended by replacing parity codes with higher-redundancy codes that enable correction. The hardware overhead of the proposed methodology is analyzed, and area savings of about 15% are obtained compared with an existing soft error mitigation methodology of equivalent coverage. The impact of different derating factors in determining the hardware overhead of soft error mitigation is then analyzed, considering electrical derating and timing derating information.
The area overhead of a circuit implementing the delayed-capture methodology is analyzed with the derating factors considered standalone and in combination. The results indicate that, depending on the circuit, either a combination of the derating factors or one of them alone yields the best result, a consequence of the heuristic nature of the algorithms used. About 23% area savings are obtained by employing these derating factors for a more optimal grouping of flip-flops. Finally, a new paradigm of hardware-software co-design for reliability is proposed, based on application derating: the application or firmware code is profiled to identify the critical components that must be guarded against soft errors, exploiting the ability of the application software to tolerate certain hardware errors. An algorithm based on fault injection is developed to identify critical components in the control logic. Experimental results indicate that, for a safety-critical automotive application, only 12% of the sequential logic elements are critical. This approach provides a framework for investigating how software methods can complement hardware methods to provide a reduced-hardware solution for soft error mitigation.
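A behavioral Python sketch of the parity comparison at the heart of the delayed-capture scheme, with timing abstracted away (the real mechanism relies on the skew between the functional and delayed clocks); group size and data are illustrative assumptions. The sketch also makes the detection-escape case plain: an SEU flipping an even number of bits within one group would leave the parity unchanged, which is what the graph-theoretic grouping is designed to prevent.

```python
import random

def parity(bits):
    p = 0
    for b in bits:
        p ^= b
    return p

def delayed_capture_detects(group):
    """Latch the input parity (on the delayed clock, abstracted here), inject
    one single-event upset, and flag a mismatch against the output parity."""
    latched_in_parity = parity(group)       # the single extra parity flip-flop
    hit = random.randrange(len(group))      # particle strike: one bit upset
    group[hit] ^= 1
    return parity(group) != latched_in_parity

random.seed(0)
assert all(delayed_capture_detects([random.randint(0, 1) for _ in range(8)])
           for _ in range(1000))
print("every injected single upset within a group is detected by its parity")
```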
89

Error control with binary cyclic codes

Grymel, Martin-Thomas January 2013 (has links)
Error-control codes provide a mechanism for increasing the reliability of digital data being processed, transmitted, or stored under noisy conditions. Cyclic codes constitute an important class of error-control code, offering powerful error detection and correction capabilities. They can easily be generated and verified in hardware, which makes them particularly well suited to practical use as error-detecting codes. A cyclic code is based on a generator polynomial, which determines its properties, including its specific error detection strength. The optimal choice of polynomial depends on many factors that may be influenced by the underlying application; it is therefore advantageous to employ programmable cyclic code hardware that allows a flexible choice of polynomial to be applied to different requirements. A novel method is presented in this thesis to realise programmable cyclic code circuits that are fast, energy-efficient, and economical in implementation resources. It can be shown that the correction of a single-bit error on the basis of a cyclic code is equivalent to the solution of an instance of the discrete logarithm problem. A new approach is proposed for computing discrete logarithms; it leads to a generic deterministic algorithm for analysed group orders that equal Mersenne numbers whose exponent is a power of two. The algorithm exhibits a worst-case runtime on the order of the square root of the group order, with constant space requirements. The thesis also establishes new relationships for finite fields represented as the polynomial ring over the binary field modulo a primitive polynomial. With a subset of these properties, a novel approach is developed for solving the discrete logarithm in the multiplicative groups of these fields, leading to a deterministic algorithm for small group orders with linear space and linearithmic time requirements in the degree of the defining polynomial, enabling efficient correction of single-bit errors based on the corresponding cyclic codes.
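The equivalence between single-bit error correction and a discrete logarithm can be made concrete in a toy example: the syndrome of an error at position i is x^i mod g(x), so locating the error means inverting that map, a discrete logarithm in the multiplicative group generated by x. The following Python sketch uses an assumed small (7,4) cyclic code with g(x) = x³ + x + 1, not the large-degree polynomials the thesis targets, and inverts the map by table lookup rather than by the thesis's algorithms.

```python
def mod2_remainder(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials are integers,
    bit i holding the coefficient of x^i."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

G = 0b1011                 # g(x) = x^3 + x + 1, primitive: a (7,4) cyclic code
R = G.bit_length() - 1

def encode(msg):           # systematic encoding: append the CRC-style check bits
    return (msg << R) | mod2_remainder(msg << R, G)

# The syndrome of a single-bit error at position i is x^i mod g(x); this table
# inverts the map, i.e. solves the discrete logarithm for the toy field.
syndrome_to_pos = {mod2_remainder(1 << i, G): i for i in range(7)}

cw = encode(0b1101)
corrupted = cw ^ (1 << 4)              # flip bit 4 in transit
s = mod2_remainder(corrupted, G)       # codewords are multiples of g(x)
fixed = corrupted ^ (1 << syndrome_to_pos[s])
assert fixed == cw
print("single-bit error located via syndrome -> position (discrete log) lookup")
```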
90

On The Fourier Transform Approach To Quantum Error Control

Kumar, Hari Dilip 07 1900 (has links) (PDF)
Quantum mechanics is the physics of the very small, and quantum computers are devices that utilize the power of quantum mechanics for their computational primitives. Associated with each quantum system is an abstract space known as its Hilbert space, and a subspace of the Hilbert space is known as a quantum code. Quantum codes make it possible to protect the computational state of a quantum computer against decoherence errors. The well-known classes of quantum codes are stabilizer (additive) codes, non-additive codes, and Clifford codes. This thesis demonstrates a general approach to the construction of these classes of quantum codes, within the framework of the Fourier transform over finite groups. The thesis is divided into five chapters. The first chapter is an introduction to basic quantum mechanics, quantum computation and quantum noise, laying the foundation for the quantum error correction theory of the next chapter. The second chapter introduces the basic theory of quantum error correction, together with the various classes and constructions of active quantum error-control codes. The third chapter introduces the Fourier transform over finite groups and shows how it may be used to construct all the known classes of quantum codes, as well as a class of quantum codes as yet unpublished in the literature. The transform-domain approach was originally introduced in (Arvind et al., 2002), but not all classes of quantum codes were covered there; we elaborate on that work to introduce the other classes, along with a new class of codes obtained from idempotents in the transform domain. The fourth chapter details the computer programs used to generate and test the various code classes, written in the GAP (Groups, Algorithms, Programming) computer algebra package. The fifth and final chapter concludes with possible directions for future work. References cited in the thesis are attached at its end.
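As a purely classical illustration of the transform-domain viewpoint, offered as an analogy rather than as the thesis's quantum construction: over the group (Z₂)ⁿ the Fourier transform is the Walsh-Hadamard transform, and the indicator of a subgroup transforms into a scaled indicator of its dual subgroup, the duality underlying transform-domain code constructions. A short Python sketch:

```python
def walsh_hadamard(f):
    """Fourier transform over (Z_2)^n: the characters are chi_s(x) = (-1)^(s.x),
    computed by the standard in-place butterfly in n passes."""
    f = list(f)
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

# Indicator of the subgroup {x : lowest bit of x is 0} in (Z_2)^3:
print(walsh_hadamard([1, 0, 1, 0, 1, 0, 1, 0]))
# -> [4, 4, 0, 0, 0, 0, 0, 0]: four times the indicator of the dual {0, 1}
```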
