471

Pitch filtering in adaptive predictive coding of speech

Ramachandran, Ravi P. January 1986 (has links)
No description available.
472

Motion-compensated predictive coding of image sequences : analysis and evaluation

O'Shaughnessy, Richard. January 1985 (has links)
No description available.
473

Burst and compound error correction with cyclic codes.

Lewis, David John Head January 1971 (has links)
No description available.
474

Software implementation of Viterbi maximum-likelihood decoding

Almonte, Caonabo January 1981 (has links)
No description available.
475

On the nonexistence of perfect e-codes and tight 2e-designs in Hamming schemes H(n,q) with e > 3 and q > 3 /

Hong, Yiming, January 1984 (has links)
No description available.
476

Vector quantization applied to speech coding in the wireless environment

Morgenstern, Robert M. 29 July 2009 (has links)
This thesis describes the development of the Voice Coding Development and Research (VoCoDeR) System, a software tool for the testing and development of new speech coding methods. This tool enables a researcher to build a voice encoder and speech filters, determine an optimal bit allocation scheme, and/or create an error correction scheme, as desired. Using a channel simulation tool such as BERSIM, the user can create bit error patterns to corrupt the data and then decode the speech for playback and analysis. The system is based upon the North American Digital Cellular (NADC) 8 kbps Vector-Sum Excited Linear Prediction (VSELP) speech coder and is currently capable of simulating the complete IS-54 source and channel coding scheme. The system is tested using Multi-Stage Vector Quantization and Finite-State Vector Quantization (FSVQ) applied to the linear prediction coefficients. FSVQ provides significant bit rate savings over previous methods of quantization. A variety of coefficient representations are compared, including log-area ratios, arcsine reflection coefficients, line spectrum pairs and immittance spectrum pairs. This has allowed the recently introduced immittance spectrum pairs to be tested using vector quantization. Multiple distortion measures are also examined. The VoCoDeR System provides a tool that will allow an engineer to work on new speech coding algorithms or to determine an optimal source and channel coding scheme. / Master of Science
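
To make the multi-stage idea concrete, here is a minimal Python sketch (not from the thesis: the codebook sizes, dimensions, and squared-error distortion measure are illustrative assumptions, and real VSELP/IS-54 codebooks are trained rather than random). Each stage quantizes the residual left by the previous stage, so several small codebooks stand in for one prohibitively large one.

    import numpy as np

    def msvq_encode(x, codebooks):
        """Multi-stage VQ encoder: pick the nearest codeword at each
        stage, then pass the remaining residual to the next stage."""
        indices = []
        residual = x.copy()
        for cb in codebooks:                          # cb: (size, dim) array
            d = np.sum((cb - residual) ** 2, axis=1)  # squared-error distortion
            i = int(np.argmin(d))                     # nearest codeword this stage
            indices.append(i)
            residual = residual - cb[i]               # residual carried onward
        return indices

    def msvq_decode(indices, codebooks):
        """Reconstruction is the sum of the selected stage codewords."""
        return sum(cb[i] for i, cb in zip(indices, codebooks))

    # Toy demo: two random 4-entry stages (2 + 2 = 4 bits) quantizing a
    # 3-dimensional vector; two stages store 8 codewords where a single
    # 4-bit codebook would store 16.
    rng = np.random.default_rng(0)
    stages = [rng.standard_normal((4, 3)), 0.25 * rng.standard_normal((4, 3))]
    x = rng.standard_normal(3)
    idx = msvq_encode(x, stages)
    print(idx, np.linalg.norm(x - msvq_decode(idx, stages)))
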
477

A Survey of the Hadamard Conjecture

Tressler, Eric Paul 10 May 2004 (has links)
Hadamard matrices are defined, and their basic properties outlined. A survey of historical and recent literature follows, in which a number of existence theorems are examined and given context. Finally, a new result for Hadamard matrices over Z_2 is presented and given a graph-theoretic interpretation. / Master of Science
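
For readers new to the topic, a short Python sketch (not from the thesis) of the two basic facts such a survey builds on: Sylvester's construction yields Hadamard matrices of every order 2^k, and a +/-1 matrix H of order n is Hadamard exactly when H times its transpose equals n times the identity.

    import numpy as np

    def sylvester(k):
        """Sylvester's construction: start from [1] and repeatedly form
        [[H, H], [H, -H]], giving a Hadamard matrix of order 2**k."""
        H = np.array([[1]])
        for _ in range(k):
            H = np.block([[H, H], [H, -H]])
        return H

    def is_hadamard(H):
        """A +/-1 matrix of order n is Hadamard iff its rows are pairwise
        orthogonal, i.e. H @ H.T == n * I."""
        n = H.shape[0]
        return np.array_equal(H @ H.T, n * np.eye(n, dtype=H.dtype))

    print(is_hadamard(sylvester(3)))   # True for order 8; the conjecture
                                       # concerns existence at every order 4m
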
478

Generating Machine Code for High-Level Programming Languages

Chao, Chia-Huei 12 1900 (has links)
The purpose of this research was to investigate the generation of machine code from a high-level programming language. The following steps were undertaken: 1) choose a high-level programming language as the source language and a computer as the target computer; 2) examine all stages in compiling the high-level language and all data sets involved in the compilation; 3) discover the mechanism for generating machine code, and for generating more efficient machine code, from the language; 4) construct an algorithm for generating machine code for the target computer. The results suggest that a compiler is best implemented in a high-level programming language, and that the SCANNER and PARSER should be independent of target representations, if possible.
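
As a toy illustration of that target-independence point (entirely hypothetical; the thesis targeted an actual machine, not the tuple-coded stack machine invented here), the parser's output tree can be walked by a back end that knows nothing about how the tree was produced:

    import operator

    def gen(node, out):
        """Emit stack-machine code for an expression tree: leaves are
        literals, interior nodes are (op, left, right) tuples."""
        if isinstance(node, (int, float)):
            out.append(("PUSH", node))        # leaf: push the operand
        else:
            op, left, right = node
            gen(left, out)                    # code for both operands first,
            gen(right, out)
            out.append((op, None))            # then the operator

    def run(code):
        """A tiny interpreter for the generated code, used to check it."""
        ops = {"ADD": operator.add, "MUL": operator.mul}
        stack = []
        for op, arg in code:
            if op == "PUSH":
                stack.append(arg)
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(ops[op](a, b))
        return stack.pop()

    tree = ("ADD", 2, ("MUL", 3, 4))          # the expression 2 + 3 * 4
    code = []
    gen(tree, code)
    print(code, "=>", run(code))              # => 14
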
479

ON THE PROPERTIES AND COMPLEXITY OF MULTICOVERING RADII

Mertz, Andrew Eugene 01 January 2005 (has links)
People rely on the ability to transmit information over channels of communication that are subject to noise and interference. This makes the ability to detect and recover from errors extremely important. Coding theory addresses this need for reliability. A fundamental question of coding theory is whether and how we can correct the errors in a message that has been subjected to interference. One answer comes from structures known as error correcting codes.

A well studied parameter associated with a code is its covering radius. The covering radius of a code is the smallest radius such that every vector in the Hamming space of the code is contained in a ball of that radius centered around some codeword. Covering radius relates to an important decoding strategy known as nearest neighbor decoding.

The multicovering radius is a generalization of the covering radius that was proposed by Klapper [11] in the course of studying stream ciphers. In this work we develop techniques for finding the multicovering radius of specific codes. In particular, we study the even weight code, the 2-error correcting BCH code, and linear codes with covering radius one.

We also study questions involving the complexity of finding the multicovering radius of codes. We show: lower bounding the m-covering radius of an arbitrary binary code is NP-complete when m is polynomial in the length of the code. Lower bounding the m-covering radius of a linear code is Σ^p_2-complete when m is polynomial in the length of the code. If P is not equal to NP, then the m-covering radius of an arbitrary binary code cannot be approximated within a constant factor or within a factor n^ϵ, where n is the length of the code and ϵ < 1, in polynomial time. Note that the case when m = 1 was also previously unknown. If NP is not equal to Σ^p_2, then the m-covering radius of a linear code cannot be approximated within a constant factor or within a factor n^ϵ, where n is the length of the code and ϵ < 1, in polynomial time.
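
A brute-force Python sketch of the definitions (illustrative only; as the complexity results above indicate, anything beyond toy lengths is computationally hard):

    from itertools import combinations, product

    def m_covering_radius(code, n, m):
        """Smallest r such that every set of m length-n binary vectors lies
        in a Hamming ball of radius r around some single codeword; m = 1
        recovers the classical covering radius. Exponential brute force."""
        def dist(u, v):
            return sum(a != b for a, b in zip(u, v))
        space = list(product((0, 1), repeat=n))
        return max(min(max(dist(v, c) for v in vs) for c in code)
                   for vs in combinations(space, m))

    # The length-4 even weight code: covering radius 1, 2-covering radius 3.
    n = 4
    even = [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 0]
    print(m_covering_radius(even, n, 1))   # 1
    print(m_covering_radius(even, n, 2))   # 3
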
480

Characterization and Coding Techniques for Long-Haul Optical Telecommunication Systems

Ivkovic, Milos January 2007 (has links)
This dissertation is a study of errors in long-haul optical fiber systems and how to cope with them. First we characterize error events occurring during transmission, then we determine lower bounds on information capacity (achievable information rates), and finally we propose coding schemes for these systems.

Existing approaches for obtaining probability density functions (PDFs) for pulse energy in long-haul optical fiber transmission systems rely on numerical simulations or analytical approximations. Numerical simulations make far tails of the PDFs difficult to obtain, while existing analytic approximations are often inaccurate, as they neglect nonlinear interaction between pulses and noise. Our approach combines the instanton method from statistical mechanics, to model far tails of the PDFs, with numerical simulations to refine the middle part of the PDFs. We combine the two methods by using an orthogonal polynomial expansion constructed specifically for this problem. We demonstrate the approach on an example of a specific submarine transmission system.

Once the channel is characterized, estimating achievable information rates is done by a modification of a method originally proposed by Arnold and Pfister. We give numerical results for the same optical transmission system (a submarine system at a transmission rate of 40 Gb/s). The achievable information rate varies with noise and the length of the bit patterns considered (among other parameters). We report achievable numerical rates for systems with different noise levels, propagation distances, and lengths of the bit patterns considered.

We also propose two iterative decoding schemes suitable for high-speed long-haul optical transmission. One scheme is a modification of a method, originally proposed in the context of magnetic media, which incorporates the BCJR algorithm (to overcome intersymbol interference) and Low-Density Parity-Check (LDPC) codes for additional error resilience. This is a "soft decision" scheme, meaning that the decoding algorithm operates with probabilities (instead of binary values). The second scheme is "hard decision": it operates with binary values. This scheme is based on maximum likelihood sequence detection (the Viterbi algorithm) and a hard-decision "Gallager B" decoding algorithm for LDPC codes.
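
To give a flavor of hard-decision LDPC decoding, here is a simple bit-flipping decoder in Python (a simpler cousin of the Gallager B scheme the abstract names, not the dissertation's implementation; the small Hamming-code parity-check matrix below merely stands in for a real sparse LDPC matrix):

    import numpy as np

    def bit_flip_decode(H, y, max_iters=50):
        """Hard-decision bit flipping: while any parity check fails, flip
        the bits that participate in the most failed checks."""
        x = y.copy()
        for _ in range(max_iters):
            syndrome = (H @ x) % 2         # which parity checks fail
            if not syndrome.any():
                return x                   # all checks satisfied: done
            votes = H.T @ syndrome         # failed checks touching each bit
            x[votes == votes.max()] ^= 1   # flip the worst offenders
        return x                           # give up after max_iters

    # Parity-check matrix of the [7,4] Hamming code as a stand-in.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    received = np.zeros(7, dtype=int)      # the all-zero codeword...
    received[2] ^= 1                       # ...hit by a single bit error
    print(bit_flip_decode(H, received))    # recovers the all-zero word
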
