About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

VLSI implementation of low-error-floor multi-rate capacity-approaching low-density parity-check code decoder

Yang, Lei, January 2006 (has links)
Thesis (Ph. D.)--University of Washington, 2006. / Vita. Includes bibliographical references (leaves 99-103).
42

Transmission distortion modeling for wireless video communication

Dani, Janak January 2005 (has links)
Thesis (M.S.)--University of Missouri-Columbia, 2005. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on January 22, 2007. Includes bibliographical references.
43

Interleaver design for modified circular simplex turbo block coded modulator

Ganti, Kamalakar. January 2004 (has links)
Thesis (M.S.)--Ohio University, November 2004. / Title from PDF t.p. Includes bibliographical references (p. 54-56).
44

Modified VLSI designs for error correction codes

Chen, Lupin. January 2008 (has links)
Thesis (M.S.)--Oregon State University, 2008. / Printout. Includes bibliographical references (leaves 53-56). Also available on the World Wide Web.
45

Viterbi decoding of ternary line codes

Ouahada, Khmaies 26 February 2009 (has links)
M.Ing.
46

Complexity and Power Consumption in Stochastic Iterative Decoders

Payak, Keyur M. 01 December 2010 (has links)
Stochastic iterative decoding is a novel method for decoding the bits received at the end of a communication channel and for controlling the rate of errors introduced into the message bits by channel noise. This decoder uses stochastic computation, which is based on manipulating probabilities carried by random sequences of digital bits. The hardware needed to implement this arithmetic is very simple and can be built entirely from simple digital complementary metal-oxide-semiconductor (CMOS) gates. This makes the decoder technology independent, a major advantage over its digital and analog counterparts, which are complex and technology dependent. However, this decoder presents a new set of problems: nodes in stochastic decoders can lock to a fixed state if the stochastic streams are correlated due to the presence of cycles in the decoder's factor graph. To overcome this problem, additional logic has to be introduced on every edge of the decoder to break this correlation. This work presents the application-specific integrated circuit (ASIC) design and simulation of the digital core of a stochastic iterative decoder in 0.18 um technology (Spectre). This thesis also examines the gate complexity and power consumption of the decoder with edge-memory, tracking-forecast-memory, and dual-counter hysteresis techniques in place.
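To illustrate the stochastic computation the abstract refers to, here is a minimal Python sketch of the general technique (an illustration, not code from the thesis): a probability is encoded as a random bit stream, a single AND gate multiplies two probabilities, and counting ones decodes the result.

    import random

    def bernoulli_stream(p, n, rng):
        # Encode probability p as n random bits with P(bit = 1) = p.
        return [1 if rng.random() < p else 0 for _ in range(n)]

    def stochastic_multiply(a_bits, b_bits):
        # An AND gate multiplies the probabilities of two independent
        # streams: P(a AND b) = P(a) * P(b).
        return [a & b for a, b in zip(a_bits, b_bits)]

    def estimate(bits):
        # Decode a stream back to a probability by counting ones.
        return sum(bits) / len(bits)

    rng = random.Random(1)
    n = 100_000
    a = bernoulli_stream(0.6, n, rng)
    b = bernoulli_stream(0.5, n, rng)
    print(estimate(stochastic_multiply(a, b)))  # ~0.30 = 0.6 * 0.5

The lock-up problem described in the abstract arises when such streams are not independent; the per-edge logic (edge memories and related techniques) re-randomizes the bits on each edge to restore that independence.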
47

Episode 6.08 – Binary Decoders

Tarnoff, David 01 January 2020 (has links)
What does it take to switch on a device? In some cases, like getting a soda from a vending machine, a number of conditions must be just right. That’s where binary decoders come in.
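As a concrete illustration (ours, not from the episode): a binary decoder takes an n-bit value and raises exactly one of its 2^n output lines, typically gated by an enable signal. A behavioral Python sketch:

    def binary_decoder(value, n_bits, enable=True):
        # n-to-2**n decoder: exactly one output line goes high when
        # enabled, selected by the binary input value.
        outputs = [0] * (2 ** n_bits)
        if enable:
            outputs[value] = 1
        return outputs

    # A 2-to-4 decoder: input 0b10 selects output line 2.
    print(binary_decoder(0b10, 2))  # [0, 0, 1, 0]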
48

Energy-efficient custom integrated circuit design of universal decoders using noise-centric GRAND algorithms

Riaz, Arslan 24 May 2024 (has links)
Whenever data is stored or transmitted, it inevitably encounters noise that can lead to harmful corruption. Communication technologies rely on decoding the data using Error Correcting Codes (ECC), which enable the rectification of noise to retrieve the original message. Maximum Likelihood (ML) decoding has proven to be optimally accurate, but it has not been adopted due to the lack of a feasible implementation arising from its computational complexity: ML decoding of arbitrary linear codes has been established to be a Nondeterministic Polynomial-time (NP) hard problem. As a result, many code-specific decoders have been developed as approximations of an ML decoder. This code-centric decoding approach leads to hardware implementations that are tightly coupled to a specific code structure. The recently proposed Guessing Random Additive Noise Decoding (GRAND) offers a solution by establishing a noise-centric decoding approach, making it a universal ML decoder. Both the soft-detection and hard-detection variants of GRAND have been shown to be capacity-achieving for any moderate-redundancy arbitrary code. This thesis claims that GRAND can be efficiently implemented in hardware with low complexity while offering significantly higher energy efficiency than state-of-the-art code-centric decoders. In addition to being hardware-friendly, GRAND offers high parallelizability that can be chosen according to the throughput requirement, making it flexible for a wide range of applications. To support this claim, this thesis presents custom-designed, energy-efficient integrated circuits and hardware architectures for the family of GRAND algorithms. The universality of the algorithm is demonstrated through measurements across various codebooks under different channel conditions. Furthermore, we employ the noise recycling technique in both hard-detection and soft-detection scenarios to improve decoding by exploiting temporal noise correlations. Using the fabricated chips, we demonstrate that employing noise recycling with GRAND significantly reduces energy and latency while providing additional gains in decoding performance. Efficient integrated architectures for GRAND significantly reduce hardware complexity while future-proofing a device so that it can decode any forthcoming code. The noise-centric decoding approach overcomes the need for code standardization, making it adaptable to a wide range of applications. A single GRAND chip can replace all existing decoders, offering competitive decoding performance while also providing significantly higher energy and area efficiency.
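For readers unfamiliar with the algorithm family, the core of hard-detection GRAND can be sketched in a few lines of Python (a toy illustration under a binary-symmetric-channel assumption, not the thesis's circuit): noise patterns are guessed in order of decreasing likelihood, i.e. increasing Hamming weight, and the first guess that turns the received word into a codeword is returned.

    from itertools import combinations
    import numpy as np

    def grand_decode(y, H, max_weight=3):
        # Hard-detection GRAND: test noise patterns from most to least
        # likely (weight 0, 1, 2, ...) until y XOR noise satisfies every
        # parity check, i.e. H @ c = 0 (mod 2).
        n = len(y)
        for w in range(max_weight + 1):
            for flips in combinations(range(n), w):
                c = y.copy()
                c[list(flips)] ^= 1          # apply the guessed noise
                if not (H @ c % 2).any():    # syndrome is zero: codeword
                    return c
        return None                          # abandon guessing

    # Works for any linear code given its parity-check matrix; here the
    # [7,4] Hamming code, with one flipped bit in the received word.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    y = np.array([1, 0, 1, 1, 0, 1, 1])
    print(grand_decode(y, H))                # [1 0 1 1 0 1 0]

Because the loop touches the code only through the parity check H @ c = 0, the same guessing hardware can serve any linear codebook, which is the universality the thesis exploits.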
49

Development of system for teaching turbo code forward error correction techniques

Shi, Shuai January 2007 (has links)
Thesis (M.Tech.: Electronic Engineering)--Dept. of Electronic Engineering, Durban University of Technology, 2007. 1 v. (various pagings) / The objective was to develop a turbo code demonstration system for educational use. The aim was to build a system that would execute rapidly and produce a graphical display exemplifying the power of turbo codes and showing the effects of parameter variation.
50

Contrôle des performances et conciliation d’erreurs dans les décodeurs d’image / Performance monitoring and errors reconciliation in image decoders

Takam tchendjou, Ghislain 12 December 2018 (has links)
This thesis deals with the development and implementation of error detection and correction algorithms for images, with the goal of monitoring the quality of the images produced at the output of digital decoders. To achieve this, we first surveyed the state of the art. A critical examination of the approaches in use motivated the construction of a set of objective methods for evaluating the visual quality of images, based on machine learning. These algorithms take as input a set of features or metrics extracted from the images. Depending on these features, and on whether a reference image is available, two kinds of objective measures were developed: the first based on full-reference metrics, and the second based on no-reference metrics, both targeting non-specific distortions. In addition to these objective evaluation methods, a method for evaluating and improving image quality based on the detection and correction of defective pixels was implemented. The applications helped refine both the visual quality assessment methods and the objective algorithms for detecting and correcting defective pixels, compared with the various methods currently in use. The techniques developed were implemented on FPGA boards to integrate the models that performed best in the simulation phase.
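The thesis's own detection and correction algorithms are not reproduced here; as a generic illustration of the idea, one common scheme (not necessarily the method developed in the thesis) flags a pixel as defective when it deviates strongly from the median of its neighborhood and replaces it with that median:

    import numpy as np

    def correct_defective_pixels(img, threshold=40):
        # Flag a pixel as defective when it differs from the median of
        # its eight 3x3 neighbors by more than `threshold`, then replace
        # it with that median.
        padded = np.pad(img, 1, mode='edge')
        out = img.copy()
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                window = padded[i:i + 3, j:j + 3].astype(int)
                neighbors = np.delete(window.ravel(), 4)  # drop center
                med = int(np.median(neighbors))
                if abs(int(img[i, j]) - med) > threshold:
                    out[i, j] = med
        return out

    # A stuck-at-white pixel in a smooth 8-bit patch is repaired.
    patch = np.full((5, 5), 120, dtype=np.uint8)
    patch[2, 2] = 255
    print(correct_defective_pixels(patch)[2, 2])  # 120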
