51

Coordination of autonomous devices over noisy channels : capacity results and coding techniques

Cervia, Giulia 30 November 2018 (has links)
5G networks will be characterized by machine-to-machine (M2M) communication and the Internet of Things, a unified network of connected objects. In this context, communicating devices are autonomous decision-makers that cooperate, coordinate their actions, and reconfigure dynamically according to changes in the environment. It is therefore essential to develop effective techniques for coordinating the actions of the nodes in the network. Information theory allows us to study the long-term behavior of the devices through the joint probability distribution of their actions. In particular, we are interested in strong coordination, which requires the joint distribution of the sequences of actions to converge in L^1 distance to an i.i.d. target distribution. We consider a two-node network comprising an information source and a noisy channel, and we require the coordination of the signals at the input and output of the channel with the source and its reconstruction. We assume that the encoder and decoder share a common source of randomness, and we introduce a state capturing the effect of the environment. The first objective of this work is to characterize the strong coordination region, i.e. the set of achievable joint behaviors and the minimal required rates of common randomness. We prove inner and outer bounds for this region. We then characterize the exact coordination region in three particular cases: when the channel is perfect, when the decoder is lossless, and when the random variables of the channel are separated from the random variables of the source. The study of the latter case allows us to show that the joint source-channel separation principle does not hold for strong coordination. Moreover, we prove that strong coordination offers "free" security guarantees at the physical layer. The second objective of this work is to develop practical codes for coordination: by exploiting the technique of source polarization, we design an explicit coding scheme for coordination, providing a constructive alternative to random coding proofs.
52

Involution Codes with Application to DNA Strand Design

Mahalingam, Kalpana 01 July 2004 (has links)
The set of all sequences that are generated by a bio-molecular protocol forms a language over the four-letter alphabet Delta = {A, G, C, T}. This alphabet is associated with the natural involution mapping Theta, which maps A to T and G to C, and which is an antimorphism of Delta*. In order to avoid undesirable Watson-Crick bonds between the words, the language has to satisfy certain coding properties. Hence, for an involution Theta we consider involution codes: Theta-infix, Theta-comma-free, Theta-k-codes and Theta-subword-k-codes, which avoid certain undesirable hybridizations. We investigate the closure properties of these codes, as well as the conditions under which both X and X+ are the same type of involution code. We provide properties of the splicing system such that the language generated by the system preserves the desired properties of the code words. Algebraic characterizations of these involutions through their syntactic monoids are also discussed. Methods of constructing involution codes that are strictly locally testable are also given. General methods for generating such involution codes are presented, and the information capacity of these codes is shown to be optimal in most cases. A specific set of these codes was chosen for experimental testing, and the results of these experiments are presented.
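A minimal sketch of the Watson-Crick involution Theta and the Theta-infix property described above; the code sets below are toy examples for illustration, not the experimentally tested codes:

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def theta(word):
    """Watson-Crick involution: complement each base, then reverse.
    Reversal makes it an antimorphism: theta(uv) = theta(v) theta(u)."""
    return "".join(COMPLEMENT[b] for b in reversed(word))

def is_theta_infix(code):
    """Theta-infix property: no theta-image of a codeword occurs as a
    factor (contiguous subword) of any codeword, which rules out the
    corresponding undesirable hybridizations."""
    return not any(theta(x) in y for x in code for y in code)

# theta is an involution: applying it twice returns the original word.
assert theta(theta("AACG")) == "AACG"

print(theta("AAC"))                       # GTT
print(is_theta_infix({"AAC", "ACC"}))     # True: GTT and GGT appear in no codeword
print(is_theta_infix({"AT"}))             # False: "AT" is its own theta-image
```

The other properties (Theta-comma-free, Theta-k-codes) restrict where such factors may occur and would be checked analogously.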
53

Hardware Accelerator for Duo-binary CTC decoding : Algorithm Selection, HW/SW Partitioning and FPGA Implementation

Bjärmark, Joakim, Strandberg, Marco January 2006 (has links)
Wireless communication is always struggling with errors in the transmission. The digital data received from the radio channel is often erroneous due to thermal noise and fading. The error rate can be lowered by using higher transmission power or by using an effective error-correcting code. Power consumption and limits on electromagnetic radiation are two of the main problems with handheld devices today, and an efficient error-correcting code lowers the required transmission power and therefore also the power consumption of the device.

Duo-binary CTC is an improvement of the innovative turbo codes presented in 1996 by Berrou and Glavieux and is used in many of today's standards for radio communication, e.g. IEEE 802.16 (WiMAX) and DVB-RCS. This report describes the development of a duo-binary CTC decoder and the different problems that were encountered during the process, including design issues and algorithm choices.

An implementation in VHDL has been written for Altera's Stratix II S90 FPGA, and a reference model has been made in Matlab. The model has been used to simulate bit error rates for different implementation alternatives and as a bit-true reference for the hardware verification.

The final result is a duo-binary CTC decoder compatible with Altera's Stratix II designs and a reference model that can be used when simulating the decoder alone or the whole signal-processing chain. Among the features of the hardware: block sizes, puncturing rates and the number of iterations are configured dynamically between blocks. Before synthesis, it is possible to choose how many decoders will work in parallel and with how many bits the soft input will be represented. The circuit has been run at 100 MHz in the lab, which gives a throughput of around 50 Mbit/s with four decoders working in parallel. This report describes the implementation, including its development, background and future possibilities.
55

LDPC Codes over Large Alphabets and Their Applications to Compressed Sensing and Flash Memory

Zhang, Fan 2010 August 1900 (has links)
This dissertation focuses on the analysis, design and optimization of low-density parity-check (LDPC) codes over channels with large alphabet sets, and on their applications to compressed sensing (CS) and flash memories. Compared to belief-propagation (BP) decoding, verification-based (VB) decoding has significantly lower complexity and near-optimal performance when the channel alphabet set is large. We analyze the verification-based decoding of LDPC codes over the q-ary symmetric channel (q-SC) and propose list-message-passing (LMP) decoding, which offers a good tradeoff between complexity and decoding threshold. We prove that LDPC codes with LMP decoding achieve the capacity of the q-SC when q and the block length go to infinity. CS is a newly emerging area which is closely related to coding theory and information theory. CS deals with the problem of recovering a sparse signal from a small number of linear measurements. One big challenge in the CS literature is to reduce the number of measurements required to reconstruct the sparse signal. In this dissertation, we show that LDPC codes with verification-based decoding can be applied to CS systems with surprisingly good performance and low complexity. We also discuss the design of modulation codes and error-correcting codes (ECCs) for flash memories. We design asymptotically optimal modulation codes and discuss their improvement using ideas from load-balancing theory. We also design LDPC codes over integer rings and fields with large alphabet sets for flash memories.
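The q-ary symmetric channel underlying these results is simple to simulate: with some probability each symbol is replaced by one of the other q - 1 symbols, chosen uniformly. A minimal sketch, where the alphabet size, error probability and seed are arbitrary choices for illustration:

```python
import random

def q_sc(symbols, q, p, rng=random):
    """q-ary symmetric channel: with probability p, each input symbol is
    replaced by one of the other q - 1 symbols, chosen uniformly."""
    out = []
    for s in symbols:
        if rng.random() < p:
            s = (s + rng.randrange(1, q)) % q  # adding 1..q-1 guarantees a change
        out.append(s)
    return out

rng = random.Random(42)
msg = [rng.randrange(16) for _ in range(10)]  # symbols from a q = 16 alphabet
received = q_sc(msg, 16, 0.1, rng)
errors = sum(a != b for a, b in zip(msg, received))
print(errors)  # number of symbol errors out of 10
```

A VB or LMP decoder would then exploit the key property of this channel for large q: a received symbol is either exactly correct or essentially random.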
56

Capacity and Coding for 2D Channels

Khare, Aparna 2010 December 1900 (has links)
Consider a piece of information printed on paper and scanned in the form of an image. The printer, scanner, and paper naturally form a communication channel, where the printer is the sender, the scanner is the receiver, and the paper is the medium of communication. The channel created in this way is quite complicated: it maps 2D input patterns to 2D output patterns. Inter-symbol interference is introduced in the channel as a result of printing and scanning. During printing, ink from neighboring pixels can spread out. The scanning process can introduce interference in the data obtained because of the finite size of each pixel and the fact that the scanner does not have infinite resolution. Other degradations in the process can be modeled as noise in the system. The scanner may also introduce some spherical aberration due to the lensing effect. Finally, when the image is scanned, it might not be aligned exactly below the scanner, which may lead to rotation and translation of the image. In this work, we present a coding scheme for the channel and possible solutions for a few of the distortions stated above. Our solution consists of the structure, encoding and decoding scheme for the code, a scheme to undo the rotational distortion, and an equalization method. The motivation behind this is the question: what is the information capacity of paper? The purpose is to find out how much data can be printed out and retrieved successfully. This question has potential practical impact on the design of 2D bar codes, which is why encodability is a desired feature; there are also a number of other useful applications. We could successfully decode 41.435 kB of data printed on a paper of size 6.7 × 6.7 inches using a Xerox Phaser 550 printer and a Canon CanoScan LiDE 200 scanner. As described in the last chapter, the capacity of the paper using this channel is clearly greater than 0.9230 kB per square inch. The main contribution of the thesis lies in constructing the entire system and testing its performance. Since the focus is on encodable and practically implementable schemes, the proposed encoding method is compared with another well-known and easily encodable code, namely the repeat-accumulate code.
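The reported density follows directly from the decoded payload and the printed area; a quick arithmetic check:

```python
data_kb = 41.435                 # successfully decoded payload, in kB
side_in = 6.7                    # side of the printed square region, in inches
area_sq_in = side_in * side_in   # 44.89 square inches

density = data_kb / area_sq_in
print(f"{density:.4f}")          # 0.9230 kB per square inch, matching the figure above
```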
57

Propagation of updates to replicas using error correcting codes

Palaniappan, Karthik. January 2001 (has links)
Thesis (M.S.)--West Virginia University, 2001. / Title from document title page. Document formatted into pages; contains vi, 68 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 67-68).
58

Nested low-density lattice codes based on non-binary LDPC codes

Ghiya, Ankit 20 December 2010 (has links)
A family of low-density lattice codes (LDLC) is studied based on Construction A for lattices. The family of Construction-A codes is already known to contain a large capacity-achieving subset. Parallels are drawn between coset non-binary low-density parity-check (LDPC) codes and nested low-density Construction-A lattice codes. Most of the related research in the LDPC domain assumes optimal power allocation to the encoded codeword. The source coding problem of mapping a message to a power-optimal codeword for an arbitrary LDPC code is, in general, NP-hard. In this thesis, we present a novel method for encoding and decoding lattices based on non-binary LDPC codes using message-passing algorithms.
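A minimal sketch of Construction A, which underlies the lattice family above: the lattice is C + q·Z^n, so an integer vector is a lattice point exactly when its residue modulo q is a codeword. The small parity-check matrix below is a toy example for illustration and is not low-density:

```python
Q = 5  # hypothetical small alphabet size
# Parity-check matrix of a toy length-4 linear code over Z_5.
H = [[1, 2, 3, 4],
     [0, 1, 1, 1]]

def in_construction_a_lattice(x, H, q):
    """Construction A membership test: x is in C + q*Z^n iff the residue
    (x mod q) satisfies every parity check of C over Z_q."""
    return all(sum(h * (xi % q) for h, xi in zip(row, x)) % q == 0
               for row in H)

# [4, 4, 1, 0] is a codeword of C; shifting it by a vector in q*Z^n
# (here [5, -5, 0, 10]) leaves the residue, and hence membership, unchanged.
print(in_construction_a_lattice([4, 4, 1, 0], H, Q))    # True
print(in_construction_a_lattice([9, -1, 1, 10], H, Q))  # True (same residue)
print(in_construction_a_lattice([1, 0, 0, 0], H, Q))    # False
```

Message-passing decoding as in the thesis would operate on the non-binary parity checks of C combined with the unbounded integer shifts; this sketch only shows the lattice structure itself.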
59

Design of Low-Floor Quasi-Cyclic IRA Codes and Their FPGA Decoders

Zhang, Yifei January 2007 (has links)
Low-density parity-check (LDPC) codes have been intensively studied in the past decade for their capacity-approaching performance. LDPC code implementation complexity and the error-rate floor are still two significant unsolved issues which prevent their application in some important communication systems. In this dissertation, we make efforts toward solving these two problems by introducing the design of a class of LDPC codes called structured irregular repeat-accumulate (S-IRA) codes. These S-IRA codes combine several advantages of other types of LDPC codes, including low encoder and decoder complexity, flexibility in design, and good performance on different channels. It is also demonstrated in this dissertation that S-IRA codes are suitable for rate-compatible code family design, and a multi-rate code family has been designed which may be implemented with a single encoder/decoder.

The study of the error-floor problem of LDPC codes is very difficult because simulating LDPC codes on a computer at very low error rates takes an unacceptably long time. To circumvent this difficulty, we implemented a universal quasi-cyclic LDPC decoder on a field-programmable gate array (FPGA) platform. This hardware platform accelerates the simulations by more than 100 times compared to software simulations. We implemented two types of decoders with partially parallel architectures on the FPGA: a circulant-based decoder and a protograph-based decoder. Focusing on the protograph-based decoder, different soft iterative decoding algorithms were implemented. This provides us with a platform for quickly evaluating and analyzing different quasi-cyclic LDPC codes, including the S-IRA codes. A universal decoder architecture is also proposed which is capable of decoding an arbitrary LDPC code, quasi-cyclic or not. Finally, we studied the low-floor problem by focusing on one example S-IRA code. We identified the weaknesses of the code and proposed several techniques to lower the error floor. We successfully demonstrated in hardware that it is possible to lower the floor substantially by encoder and decoder modifications, but the best solution appeared to be an outer BCH code.
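A minimal sketch of the quasi-cyclic structure the decoder above exploits: the parity-check matrix is expanded from a small base (exponent) matrix, where each non-negative entry becomes a circulant permutation of the identity and -1 marks an all-zero block. The base matrix and circulant size below are hypothetical:

```python
def circulant_permutation(size, shift):
    """size x size identity matrix with columns cyclically shifted:
    row r has its single 1 in column (r + shift) mod size."""
    return [[1 if c == (r + shift) % size else 0 for c in range(size)]
            for r in range(size)]

def qc_ldpc_parity_matrix(base, size):
    """Expand a base (exponent) matrix into a quasi-cyclic parity-check
    matrix: entry s >= 0 becomes a circulant permutation with shift s,
    and -1 becomes the size x size all-zero block."""
    zero = [[0] * size for _ in range(size)]
    H = []
    for base_row in base:
        blocks = [zero if s < 0 else circulant_permutation(size, s)
                  for s in base_row]
        for r in range(size):
            H.append([v for block in blocks for v in block[r]])
    return H

# Hypothetical 2 x 3 base matrix with circulant size 4.
base = [[0, 1, -1],
        [2, 0, 3]]
H = qc_ldpc_parity_matrix(base, 4)
print(len(H), len(H[0]))  # 8 12
```

This block structure is what makes partially parallel decoder architectures natural: each circulant block maps to a simple cyclic-shift memory access pattern in hardware.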
60

List decoding of error-correcting codes : winning thesis of the 2002 ACM doctoral dissertation competition /

Guruswami, Venkatesan, January 1900 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, 2001. / "Dissertation ... written under the supervision of Madhu Sudan and submitted to MIT in August 2001"--P. xi. Includes bibliographical references and index.
