41

Decoding and Turbo Equalization for LDPC Codes Based on Nonlinear Programming

Iltis, Ronald A. 10 1900 (has links)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / Decoding and Turbo Equalization (TEQ) algorithms based on the Sum-Product Algorithm (SPA) are well established for LDPC codes. However, there is increasing interest in linear and nonlinear programming (NLP)-based decoders, which may offer computational and performance advantages over the SPA. We present NLP decoders and Turbo equalizers based on an Augmented Lagrangian formulation of the decoding problem. The decoders update estimates of both the Lagrange multipliers and the transmitted codeword while solving an approximate quadratic programming problem. Simulation results show that the NLP decoder performance is intermediate between the SPA and bit-flipping algorithms. The NLP decoder may thus be attractive in some applications, as it eliminates the tanh/atanh computations of the SPA.
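The abstract does not spell out the formulation, but the general augmented-Lagrangian idea can be sketched: relax each bit to x_i in [0, 1], express each parity check j as g_j(x) = prod_{i in N(j)} (1 - 2 x_i) - 1 = 0 (zero exactly at even parity), and alternate approximate inner minimizations with multiplier updates. A toy Python illustration, with all step sizes and penalty weights assumed rather than taken from the paper:

```python
import numpy as np

def al_decode(H, llr, rho=2.0, outer=100, inner=10, lr=0.05):
    """Toy augmented-Lagrangian LDPC decoder (illustrative sketch only).

    Minimizes sum_i llr[i] * x[i] over x in [0, 1]^n subject to
    g_j(x) = prod_{i in N(j)} (1 - 2 x[i]) - 1 = 0 for every check j.
    """
    m, n = H.shape
    checks = [np.flatnonzero(H[j]) for j in range(m)]
    x = 1.0 / (1.0 + np.exp(llr))      # soft init: P(bit = 1) from the LLRs
    mu = np.zeros(m)                   # Lagrange multiplier per check
    for _ in range(outer):
        for _ in range(inner):         # approximate inner minimization
            grad = llr.astype(float).copy()
            for j, idx in enumerate(checks):
                t = 1.0 - 2.0 * x[idx]
                g = np.prod(t) - 1.0
                # d g / d x_i = -2 * prod_{k != i} t_k  (guard tiny t)
                p = np.prod(t) / np.where(np.abs(t) > 1e-12, t, 1e-12)
                grad[idx] += (mu[j] + rho * g) * (-2.0 * p)
            x = np.clip(x - lr * grad, 0.0, 1.0)   # projected gradient step
        for j, idx in enumerate(checks):           # multiplier update
            mu[j] += rho * (np.prod(1.0 - 2.0 * x[idx]) - 1.0)
        hard = (x > 0.5).astype(int)
        if not ((H @ hard) % 2).any():             # all checks satisfied
            return hard
    return (x > 0.5).astype(int)
```

Note the contrast with the SPA: the inner loop is plain multiply-add arithmetic, with no tanh/atanh evaluations.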
42

ENHANCING THE PCM/FM LINK - WITHOUT THE MATH

Fewer, Colm, Wilmot, Sinbad 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Since the 1970s, PCM/FM has been the dominant modulation scheme used for RF telemetry. However, tighter spectrum availability and increasing data rates mean that more advanced transmission methods are required to keep pace with industry demands. ARTM Tier-I and Tier-II are examples of how the PCM/FM link can be enhanced, but these techniques require a significant increase in the complexity of the receiver/detector for optimal recovery. This paper focuses on a quantitative approach to improving the rate and quality of data over existing PCM/FM links. In particular, ACRA CONTROL and BAE SYSTEMS set themselves the goal of revisiting the pre-modulation filter, diversity combiner, and bit-sync. By implementing programmable adaptive hardware, it was possible to explore the various tradeoffs offered by modified pulse shapes and spectral occupancy, inclusion of forward error correction, and smart source selection. This paper looks at the improvements achieved at each phase of the evaluation.
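The paper does not give its combiner algorithm, but as a generic illustration of the design space between hard best-source selection and weighted combining, a pre-detection maximal-ratio-style combiner simply weights each receive channel by its estimated linear SNR (a sketch, not the ACRA CONTROL/BAE SYSTEMS implementation):

```python
import numpy as np

def combine(ch_a, ch_b, snr_a_db, snr_b_db):
    """Weight each channel by its estimated linear SNR (maximal-ratio
    style); as one SNR collapses this degenerates toward best-source
    selection of the stronger channel."""
    wa = 10 ** (snr_a_db / 10.0)
    wb = 10 ** (snr_b_db / 10.0)
    return (wa * np.asarray(ch_a) + wb * np.asarray(ch_b)) / (wa + wb)
```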
43

ERROR DETECTION AND CORRECTION -- AN EMPIRICAL METHOD FOR EVALUATING TECHNIQUES

Rymer, J. W. 10 1900 (has links)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / This paper describes a method for evaluating error correction techniques for applicability to the flight testing of aircraft. No statistical or mathematical assumptions about the channel or sources of error are used. Instead, an empirical method is shown which allows direct “with and without” comparative evaluation of correction techniques. A method was developed to extract error sequences from actual test data, independent of the source of the dropouts, and hardware was built to allow a stored error sequence to be repetitively applied to test data. Results are shown for error sequences extracted from a variety of actual test data, and the effectiveness of Reed-Solomon (R-S) encoding and interleaving is shown. The test bed hardware configuration is described. Criteria are suggested for worthwhile correction techniques, and suggestions are made for future investigation.
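A software analogue of this test bed is straightforward: capture a channel-error sequence once, then XOR it onto any data under test, with and without coding, so both runs see identical channel behavior. A minimal sketch, where the random error pattern stands in for a captured one:

```python
import numpy as np

def apply_error_sequence(data_bits, error_bits):
    """Replay a stored channel-error sequence onto clean test data.

    error_bits[k] == 1 marks a position where the real channel flipped
    a bit; XOR superimposes exactly those flips onto the data under
    test, enabling a direct "with and without" codec comparison.
    """
    n = min(len(data_bits), len(error_bits))
    return data_bits[:n] ^ error_bits[:n]

rng = np.random.default_rng(0)
data = rng.integers(0, 2, 10_000, dtype=np.uint8)
errors = (rng.random(10_000) < 0.01).astype(np.uint8)  # stand-in sequence
corrupted = apply_error_sequence(data, errors)
print("raw BER:", float(np.mean(corrupted != data)))
```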
44

STANDARD INTEROPERABLE DATALINK SYSTEM, ENGINEERING DEVELOPMENT MODEL

Cirineo, Tony, Troublefield, Bob 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / This paper describes an Engineering Development Model (EDM) for the Standard Interoperable Datalink System (SIDS). This EDM represents an attempt to design and build a programmable system that can be used to test and evaluate various aspects of a modern digital datalink. First, an investigation was made of commercial wireless components and standards that could be used to construct the SIDS datalink. This investigation led to the construction of an engineering development model, which presently consists of wire-wrap and prototype circuits that implement many aspects of a modern digital datalink.
45

AN INTRODUCTION TO LOW-DENSITY PARITY-CHECK CODES

Moon, Todd K., Gunther, Jacob H. 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Low-Density Parity-Check (LDPC) codes are powerful codes capable of nearly achieving the Shannon channel capacity. This paper presents a tutorial introduction to LDPC codes, with a detailed description of the decoding algorithm. The algorithm propagates information about bit and check probabilities through a tree obtained from the Tanner graph for the code. This paper may be useful as a supplement in a course on error-control coding or digital communication.
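The paper itself walks through the full sum-product decoder; for a quick taste of message passing on a parity-check matrix H, the far simpler hard-decision bit-flipping decoder (also due to Gallager, and not the algorithm the tutorial derives) fits in a few lines:

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=50):
    """Gallager's hard-decision bit-flipping decoder.

    H: (m, n) binary parity-check matrix; r: length-n hard-decision
    received word. Repeatedly flips the bits involved in the most
    unsatisfied checks until the syndrome vanishes.
    """
    x = np.array(r, dtype=int) % 2
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x, True                # valid codeword found
        fails = H.T @ syndrome            # unsatisfied checks per bit
        x[fails == fails.max()] ^= 1      # flip the worst offenders
    return x, False
```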
46

EXTENDING THE RANGE OF PCM/FM USING A MULTISYMBOL DETECTOR AND TURBO CODING

Geoghegan, Mark 10 1900 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / It has been shown that a multi-symbol detector can improve the detection efficiency of PCM/FM by 3 dB relative to traditional methods, without any change to the transmitted waveform. Although this is a significant breakthrough, further improvement is possible with the addition of Forward Error Correction (FEC). Systematic redundancy can be added by encoding the source data prior to modulation, allowing channel errors to be corrected by a decoding circuit. Better detection efficiency translates into additional link margin that can be used to extend the operating range, support higher data throughput, or significantly improve the quality of the received data. This paper investigates the detection efficiency that can be achieved using a multi-symbol detector and turbo product coding. The results show that this combination can improve detection performance by nearly 9 dB relative to conventional PCM/FM systems. The increase in link margin comes at the expense of a small increase in bandwidth and the added complexity of the encoding and decoding circuitry.
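The quoted gains map directly onto operating range if one assumes free-space (inverse-square) propagation, where path loss grows 6 dB per doubling of distance, so G dB of extra margin multiplies range by 10^(G/20):

```python
# Range-extension factor from coding/detection gain, assuming
# free-space propagation (path loss ~ 20*log10(distance)).
for gain_db in (3.0, 9.0):
    print(f"{gain_db:.0f} dB gain -> range x {10 ** (gain_db / 20):.2f}")
# 3 dB (multi-symbol detector alone)      -> range x 1.41
# 9 dB (detector plus turbo product code) -> range x 2.82
```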
47

A Systolic Array Based Reed-Solomon Decoder Realised Using Programmable Logic Devices

Biju, S., Narayana, T. V., Anguswamy, P., Singh, U. S. 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / This paper describes the development of a Reed-Solomon (RS) Encoder-Decoder which implements the RS segment of the telemetry channel coding scheme recommended by the Consultative Committee for Space Data Systems (CCSDS) [1]. The Euclidean algorithm has been chosen for the decoder implementation, with the hardware realization taking a systolic array approach. The fully pipelined decoder runs on a single clock, and the operating speed is limited only by the delay of the Galois Field (GF) multiplier. The circuit has been synthesised from VHDL descriptions and the hardware is being realised using programmable logic chips. The circuit was simulated for functional operation and found to correct error patterns exactly as predicted by theory.
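Since the GF multiplier bounds the clock rate, it is the natural cell to study. A bit-serial GF(2^8) multiply of the kind such hardware implements can be sketched as below; 0x187 encodes the CCSDS field polynomial x^8 + x^7 + x^2 + x + 1, though the paper's actual systolic mapping is of course different from this software loop:

```python
def gf256_mul(a: int, b: int, poly: int = 0x187) -> int:
    """Shift-and-add multiply in GF(2^8) with modular reduction.

    Each iteration conditionally accumulates `a`, then doubles it and
    reduces by the field polynomial whenever bit 8 is set.
    """
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a          # add (XOR) the current multiple
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly       # reduce modulo the field polynomial
    return r & 0xFF
```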
48

Multiplex Gene Synthesis and Error Correction from Microchips Oligonucleotides and High-throughput Gene Screening with Programmable Double Emulsion Microfluidics Droplets

Ma, Siying January 2015 (has links)
Promising applications in the design of various biological systems hold critical implications, as heralded in the rising field of synthetic biology. To achieve these goals, the ability to synthesize and screen in situ DNA constructs of any size or sequence rapidly, accurately, and economically is crucial. Today the process of DNA oligonucleotide synthesis has been automated, but the overall development of gene and genome synthesis and error correction technology has lagged far behind that of gene and genome sequencing. Lagging even further behind is the capability of screening a large population of information at the single-cell, protein, or gene level. Compartmentalization of single cells in water-in-oil emulsion droplets provides an opportunity to screen vast numbers of individual assays with quantitative readouts. However, these single-emulsion droplets are incompatible with aqueous-phase analysis and do not allow controlled molecular transport.

This thesis presents the development of a multi-tool ensemble platform targeted at high-throughput gene synthesis, error correction, and screening. An inkjet oligonucleotide synthesizer is constructed to synthesize oligonucleotides as sub-arrays onto patterned and functionalized thermoplastic microchips. The arrays are married to microfluidic wells that provide a chamber for enzymatic amplification and assembly of the DNA from the microarrays into a larger construct. Harvested product is then amplified off-chip and error-corrected using a mismatch endonuclease-based reaction. Bacterial cells bearing individual synthetic gene variants are encapsulated as single cells into double-emulsion droplets, where cell populations are enriched by up to 1000 times within several hours of proliferation. Permeation of isopropyl β-D-1-thiogalactopyranoside (IPTG) molecules from the external solution allows induction of target gene expression. The induced expression of the synthetic fluorescent proteins from at least ~100 bacteria per droplet generates clearly distinguishable fluorescent signals that enable droplet sorting with the fluorescence-activated cell sorting (FACS) technique. The integration of oligo synthesis and gene assembly on the same microchip facilitates automation and miniaturization, which reduces cost and increases throughput. The capacity of the double-emulsion system (millions of discrete compartments in 1 ml of solution), combined with high-throughput sorting by FACS, provides the basis for screening complex gene libraries for different functionalities and activities, significantly reducing cost and turn-around time. / Dissertation
49

High-Performance Decoder Architectures For Low-Density Parity-Check Codes

Zhang, Kai 09 January 2012 (has links)
Low-Density Parity-Check (LDPC) codes, invented by Gallager in the 1960s, have attracted considerable attention recently. Compared with other error-correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism, and high-throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2, and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput, and rate flexibility. With this work we investigate tradeoffs among these four aspects and develop several decoder architectures that improve one or more of them while maintaining acceptable values for the others. Firstly, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed: a parallel layered decoding architecture (PLDA) and critical-path splitting. PLDA enables parallel processing of all layers by establishing dedicated message-passing paths among them, so the decoder avoids a large crossbar-based interconnect network. Critical-path splitting is based on careful adjustment of the starting point of each layer to maximize the time intervals between adjacent layers, such that the critical-path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2, 2304-bit irregular LDPC decoder is implemented as an ASIC in a 90 nm CMOS process. The decoder achieves an input throughput of 1.1 Gbps, a 3-4x improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2. Secondly, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes that supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that the selected puncturing scheme introduces a BER degradation of less than 0.2 dB compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMax) standard. PLDA is then employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a 90 nm CMOS process; it achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1. Thirdly, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture is developed to efficiently handle the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core.
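For reference, the min-sum check-node update mentioned above replaces the sum-product's tanh/atanh evaluations with sign and minimum operations, which is precisely what makes the node cheap in hardware. A functional sketch, not the dissertation's optimized processing unit:

```python
import numpy as np

def min_sum_check_update(msgs):
    """Min-sum check-node update (assumes nonzero input messages).

    Outgoing message on edge i: magnitude = min of the *other* incoming
    magnitudes; sign = product of the *other* incoming signs.
    """
    msgs = np.asarray(msgs, dtype=float)
    sgn = np.sign(msgs)
    total_sign = np.prod(sgn)
    mag = np.abs(msgs)
    order = np.argsort(mag)
    m1, m2 = mag[order[0]], mag[order[1]]   # two smallest magnitudes
    out_mag = np.full(len(msgs), m1)
    out_mag[order[0]] = m2                  # argmin edge gets the 2nd smallest
    return total_sign * sgn * out_mag       # total_sign * sgn[i] drops edge i's own sign
```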
Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). The computational cost of BP-based detection depends only on the number of nonzero interferers, making it well suited to sparse ISI channels, which are characterized by long delay but a small fraction of nonzero interferers. The layered decoding algorithm popular in LDPC decoding is adopted here as well; simulation results show that layered decoding doubles the convergence speed of the iterative belief propagation process. Exploiting the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate practical application of the proposed detection algorithm. The proposed architecture is also reconfigurable, in order to switch flexible connections on the factor graph in time-varying ISI channels.
50

Applying the "Split-ADC" Architecture to a 16 bit, 1 MS/s differential Successive Approximation Analog-to-Digital Converter

Chan, Ka Yan 30 April 2008 (has links)
Successive Approximation (SAR) analog-to-digital converters are used extensively in biomedical applications such as CAT scanners due to the high resolution they offer. Capacitor mismatch in the SAR converter is a limiting factor for its accuracy and resolution; without some form of calibration, a SAR converter can only achieve 10-bit accuracy. In industry, the CAL-DAC approach is a popular way to calibrate the SAR ADC, but it requires significant test time. This thesis applies the "Split-ADC" architecture, with a deterministic, digital, background self-calibration algorithm, to the SAR converter to minimize test time. In this approach, a single ADC is split into two independent halves. The two split ADCs convert the same input sample and produce two output codes; the ADC output is the average of these two codes, while their difference serves as a calibration signal used to estimate the errors in the calibration parameters via a modified Jacobi method. The estimates are used to update the calibration parameters in a negative-feedback LMS procedure. The ADC is fully calibrated when the difference signal goes to zero on average. This thesis focuses on the specific implementation of the "Split-ADC" self-calibrating algorithm on a 16-bit, 1 MS/s differential SAR ADC. The ADC can be calibrated with 10^5 conversions, an improvement of 3 orders of magnitude over existing statistically-based calibration algorithms. Simulation results show that the linearity of the calibrated ADC improves to within ±1 LSB.
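The calibration loop is simple to sketch in outline: both half-ADCs digitize the same sample, so any disagreement between their corrected outputs is an error signal the LMS loop drives to zero in the background, with no known test input required. The per-bit-weight parameterization below is an assumption of this sketch, not the thesis's exact error model:

```python
import numpy as np

def split_adc_lms_step(code_a, code_b, params_a, params_b,
                       bits_a, bits_b, mu=1e-3):
    """One background "Split-ADC" self-calibration step (sketch).

    bits_a / bits_b are each half's raw per-bit decisions, used here
    as the LMS gradient direction for its correction weights.
    """
    corrected_a = code_a + bits_a @ params_a   # apply per-bit corrections
    corrected_b = code_b + bits_b @ params_b
    diff = corrected_a - corrected_b           # -> 0 on average when calibrated
    params_a -= mu * diff * bits_a             # negative-feedback LMS updates
    params_b += mu * diff * bits_b
    return 0.5 * (corrected_a + corrected_b), params_a, params_b
```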
