311

Investigation into digital audio equaliser systems and the effects of arithmetic and transform errors on performance

Clark, Robin John January 2001 (has links)
Discrete-time audio equalisers introduce a variety of undesirable artefacts into audio mixing systems: distortions caused by finite wordlength constraints, frequency response distortion due to coefficient calculation, and signal disturbances that arise from real-time coefficient update. An understanding of these artefacts is important in the design of computationally affordable, good-quality equalisers. This thesis describes a detailed investigation into these artefacts using various forms of arithmetic, filter frequency responses, input excitations and sampling frequencies. Novel coefficient calculation techniques, based on the matched z-transform (MZT), were developed to minimise filter response distortion and computation for on-line implementation. MZT-based filters were found to approximate s-plane filter responses more closely than filters based on the bilinear z-transform (BZT), with an affordable increase in computation load. Frequency response distortions and prewarping/correction schemes at higher sampling frequencies (96 and 192 kHz) were also assessed. An environment for emulating fractional quantisation in fixed- and floating-point arithmetic was developed, and various key filter topologies were emulated in both arithmetic types using various input stimuli and frequency responses. The work provides detailed objective information and an understanding of the behaviour of key topologies in fixed- and floating-point arithmetic and of the effects of input excitation and sampling frequency. Signal disturbance behaviour in key filter topologies during coefficient update was investigated through the implementation of various coefficient update scenarios. Input stimuli and specific frequency response changes that produce worst-case disturbances were identified, providing an analytical understanding of disturbance behaviour in various topologies. Existing parameter and coefficient interpolation algorithms were implemented and assessed under finite wordlength arithmetic, and the disturbance behaviour of various topologies at higher sampling frequencies was examined. The work contributes to the understanding of artefacts in audio equaliser implementation. The study of artefacts at sampling frequencies of 48, 96 and 192 kHz has implications for the assessment of equaliser performance at higher sampling frequencies.
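As a hedged illustration of the coefficient-mapping idea the abstract refers to, the sketch below maps the s-plane poles of a hypothetical analogue resonator to the z-plane with the matched z-transform; the resonator values are invented for the example, and the gain normalisation the MZT also requires is omitted.

```python
import numpy as np

def mzt_map(zeros_s, poles_s, fs):
    """Matched z-transform: map s-plane zeros and poles to the z-plane
    via z = exp(s*T), T = 1/fs. Gain must be matched separately."""
    T = 1.0 / fs
    return np.exp(np.asarray(zeros_s) * T), np.exp(np.asarray(poles_s) * T)

# Hypothetical analogue resonator at 1 kHz with Q = 2 (values invented
# for illustration): poles of s^2 + (w0/Q)s + w0^2
w0, Q = 2 * np.pi * 1000.0, 2.0
poles_s = np.roots([1.0, w0 / Q, w0 ** 2])
_, poles_z = mzt_map([], poles_s, fs=48000.0)
print(poles_z)  # complex-conjugate pair just inside the unit circle
```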
312

Error control with constrained codes

04 February 2014 (has links)
M.Ing. (Electrical and Electronic Engineering) / In an ideal communication system no noise is present and no errors are made. In practice, however, communication takes place over noisy channels, which introduce errors into the information; these errors must therefore be controlled. Furthermore, several channels impose runlength or disparity constraints on the bit stream. Until recently, error control on these channels was applied separately from the input restrictions imposed by constrained codes. Such a system performs poorly under certain conditions, and is more complex and expensive to implement than systems where the error control is an integral part of the constrained code or decoder. In this study we first investigate the error multiplication phenomenon of constrained codes. An algorithm is presented that minimizes the error propagation probabilities of memoryless decoders according to two criteria, together with a second algorithm that calculates the resulting bit error probabilities. The second approach to the error control of constrained codes is the construction of combined error-correcting constrained finite-state machine codes. We investigate the known construction techniques and construct several new codes using extensions of those techniques. These codes complement or improve on the known error-correcting constrained codes with regard to complexity, rate or error-correcting capability. Furthermore, these codes have good error behaviour and favourable power spectral densities.
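To make the runlength constraints concrete, here is a minimal sketch (not from the thesis) that checks a binary sequence against a (d, k) constraint, under which consecutive ones must be separated by at least d and at most k zeros:

```python
def satisfies_dk(bits, d, k):
    """Check a binary sequence against a (d, k) runlength constraint:
    consecutive ones are separated by at least d and at most k zeros."""
    run = None                      # zeros seen since the last one
    for b in bits:
        if b == 1:
            if run is not None and not (d <= run <= k):
                return False
            run = 0
        elif run is not None:
            run += 1
            if run > k:             # zero run already too long
                return False
    return True

print(satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], d=2, k=3))  # True
print(satisfies_dk([1, 1, 0, 1], d=1, k=3))              # False: adjacent ones
```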
313

Entanglement and quantum communication complexity.

07 December 2007 (has links)
Keywords: entanglement, complexity, entropy, measurement. In chapter 1 the basic principles of communication complexity are introduced. Two-party communication is described explicitly, and multi-party communication complexity is described in terms of the two-party model. The relation to entropy is described for the classical communication model. Important concepts from quantum mechanics are introduced, and more advanced concepts, for example the generalized measurement, are then presented in detail. In chapter 2 the different measures of entanglement are described in detail, with concrete examples, for both pure and mixed states. Some results for the Schmidt decomposition are derived for applications in communication complexity; the Schmidt decomposition is fundamental in quantum communication and computation, and is therefore presented in considerable detail. Important concepts such as positive maps and entanglement witnesses are discussed with examples. Finally, in chapter 3, the communication complexity model for quantum communication is described. A number of examples are presented to illustrate the advantages of quantum communication in the communication complexity scenario, including communication by teleportation and dense coding using entanglement. A few problems, such as the Deutsch-Jozsa problem, are worked out in detail to illustrate the advantages of quantum communication. The communication complexity of sampling establishes some relationships between communication complexity, the Schmidt rank and entropy. The last topic is coherent communication complexity, which places communication complexity completely in the domain of quantum computation. An important lower bound for the coherent communication complexity in terms of the Schmidt rank is derived; this result is the quantum analogue of the log-rank lower bound in classical communication complexity. / Prof. W.H. Steeb
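Since the Schmidt decomposition is central here, a short worked sketch may help: for a bipartite pure state it reduces to the singular value decomposition of the reshaped amplitude vector (standard textbook material, not taken from the thesis).

```python
import numpy as np

def schmidt(state, dim_a, dim_b):
    """Schmidt decomposition of a bipartite pure state: reshape the
    amplitude vector into a dim_a x dim_b matrix and take its SVD;
    the singular values are the Schmidt coefficients."""
    u, s, vh = np.linalg.svd(state.reshape(dim_a, dim_b))
    return s, u, vh

# Bell state (|00> + |11>)/sqrt(2): Schmidt rank 2, entanglement entropy 1 bit
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
coeffs, _, _ = schmidt(bell, 2, 2)
entropy = -sum(c**2 * np.log2(c**2) for c in coeffs if c > 1e-12)
print(coeffs, entropy)  # [0.7071 0.7071] 1.0
```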
314

Coding structure and properties for correcting insertion/deletion errors

08 August 2012 (has links)
D. Ing. / The digital transmission of information necessitates compensation for disturbances introduced by the channel. The compensation method usually used in digital communications is error-correcting coding. The errors usually encountered are additive in nature, i.e. errors where only symbol values are changed. Understandably, the field of additive error-correcting codes has become a mature research field: remarkable progress has been made during the past 50 years, to such an extent that performance near the Shannon capacity can be reached using suitable coding techniques. Sometimes, however, the channel disturbances result in the loss and/or gain of symbols and a consequent loss of word or frame synchronisation. Unless precautions are taken, a synchronisation error may propagate and corrupt large blocks of data. Typical precautions against synchronisation errors are: out-of-band clock signals distributed to the transmission equipment in a network; stringent requirements on clock stability and jitter; limits on the number of repeaters and on regeneration to curb jitter and delays; line coding to facilitate better clock extraction; and framing methods at the coding level. Most transmission systems in use today stop data transmission until reliable synchronisation is restored. E1 multiplexing systems are still the predominant technology among fixed telephone line operators and GSM operators, and recovering from a loss of synchronisation (the FAS alarm) typically takes approximately 10 seconds. Considering that the transmission speed is 2048 kbit/s, a large quantity of data is lost during this process. The purpose of this study is therefore to broaden the understanding of binary insertion/deletion correcting codes. This is achieved by presenting new properties and coding techniques for multiple insertion/deletion correcting codes. Mostly binary codes are considered, but in some instances the results may also hold for non-binary codes. As a secondary purpose, we hope to generate interest in this field of study and enable other researchers to explore the mechanisms of insertion and/or deletion correcting codes more deeply.
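A hedged sketch of the underlying metric: insertion and deletion errors are naturally measured by edit distance, computed here with the standard dynamic-programming recursion (illustrative only; the thesis's own constructions are not reproduced).

```python
def edit_distance(x, y):
    """Minimum number of insertions, deletions and substitutions turning
    x into y, using one rolling row of the DP table."""
    dp = list(range(len(y) + 1))
    for i, xi in enumerate(x, 1):
        prev, dp[0] = dp[0], i
        for j, yj in enumerate(y, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete xi
                                     dp[j - 1] + 1,      # insert yj
                                     prev + (xi != yj))  # substitute
    return dp[-1]

print(edit_distance("1011010", "101010"))  # 1: a single deletion suffices
```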
315

Bandwidth compression in a digital packet speech communication link

Aktekin, M. January 1980 (has links)
No description available.
316

Investigation of the use of infinite impulse response filters to construct linear block codes

Chandran, Aneesh January 2016 (has links)
A dissertation submitted in fulfilment of the requirements for the degree of Master of Science in Information Engineering, School of Electrical and Information Engineering, August 2016 / The work presented extends and contributes to research in error-control coding and information theory, focusing on the construction of block codes using an IIR filter structure. Although previous work in this area used FIR filter structures for error detection, these were inherently used in conjunction with other error-control codes; there has been no investigation into using IIR filter structures to create codewords, let alone into the validity of such a construction. In the research presented, linear block codes are created using IIR filters and their error-correcting capabilities are investigated. The construction of short codes that achieve the Griesmer bound is shown. The potential to construct long codes is discussed, and it is shown how the construction is constrained by high computational complexity. The G-matrices for these codes are obtained from a computer search and are shown not to have a quasi-cyclic structure; the codewords have been tested and shown not to be cyclic. Further analysis shows that IIR filter structures implement truncated cyclic codes, which are themselves implementable using an FIR filter. The research also shows that the codewords created from IIR filter structures are valid, by decoding them with an existing iterative soft-decision decoder. This represents a unique and valuable contribution to the field of error-control coding and information theory. / MT2017
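Since the dissertation links IIR structures to truncated cyclic codes, the generic sketch below (an illustration under my own assumptions, not the dissertation's construction) shows a feedback, i.e. recursive, shift register over GF(2) of the kind used in systematic cyclic-code encoders; the generator polynomial x^3 + x + 1 is an arbitrary example.

```python
def gf2_feedback_parity(msg_bits, gen):
    """Feedback (recursive) shift register over GF(2): divides
    msg(x) * x^deg(g) by the generator polynomial g(x) and returns the
    remainder, i.e. the parity bits of a systematic cyclic codeword.
    `gen` lists coefficients MSB first, e.g. x^3 + x + 1 -> [1, 0, 1, 1]."""
    reg = [0] * (len(gen) - 1)
    for b in msg_bits:
        fb = reg[0] ^ b                 # feedback tap (the recursion)
        reg = reg[1:] + [0]             # shift
        if fb:
            reg = [r ^ g for r, g in zip(reg, gen[1:])]
    return reg

msg = [1, 0, 1, 1, 0, 0, 1]
print(msg + gf2_feedback_parity(msg, [1, 0, 1, 1]))  # message + 3 parity bits
```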
317

A system on chip based error detection and correction implementation for nanosatellites

Hillier, Caleb Pedro January 2018 (has links)
Thesis (Master of Engineering in Electrical Engineering)--Cape Peninsula University of Technology, 2018. / This thesis focuses on preventing and overcoming the effects of radiation on RAM on board the ZA cube 2 nanosatellite. The main objective is to design, implement and test an effective error detection and correction (EDAC) system for nanosatellite applications using a SoC development board. Through an in-depth literature review, all aspects of single-event effects are investigated, from space radiation through to the implementation of an EDAC system; Hamming code was identified as a suitable EDAC scheme for mitigating single-event effects. A detailed radiation study of ZA cube 2's space environment was conducted using OMERE and TRIM software and the satellite's orbital parameters, covering the earth's radiation belts, galactic cosmic radiation, solar particle events and shielding. This provides insight into the environment to which the satellite will be exposed during orbit, and would allow accurate testing should accelerator tests with protons and heavy ions become necessary. The results confirm the need for mitigation techniques capable of EDAC. A detailed look at different EDAC schemes, together with a code comparison study, was conducted. There are two types of error control codes, namely error detection codes and error correction codes. For protection against radiation, nanosatellites use error correction codes such as Hamming, Hadamard, repetition, four-dimensional parity, Golay, BCH and Reed-Solomon codes. Each EDAC scheme is evaluated and compared on detection capability, correction capability, code rate and bit overhead, giving the reader a good understanding of all common EDAC schemes. The field of nanosatellites is evolving and growing rapidly, creating a growing demand for more advanced and reliable EDAC systems capable of protecting all memory aspects of satellites. Hamming codes were studied extensively and implemented using different approaches, languages and software. After testing three variations of Hamming codes, in both Matlab and VHDL, the final and most effective version was Hamming [16, 11, 4]2, which guarantees single-error correction and double-error detection. All developed Hamming codes are suited to FPGA implementation, for which they were tested thoroughly using simulation software and optimised.
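As a hedged illustration of the Hamming principle (a smaller (7,4) code rather than the thesis's [16, 11, 4]2 implementation), the sketch below encodes four data bits and corrects a single bit error by syndrome decoding; appending an overall parity bit would extend it to the SEC-DED behaviour described above.

```python
import numpy as np

# Hamming(7,4): G = [I | P], H = [P^T | I]; every column of H is distinct,
# so the syndrome of a single bit error identifies the error position.
G = np.array([[1,0,0,0, 0,1,1],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 1,1,0],
              [0,0,0,1, 1,1,1]])
H = np.array([[0,1,1,1, 1,0,0],
              [1,0,1,1, 0,1,0],
              [1,1,0,1, 0,0,1]])

def encode(data4):
    return (np.array(data4) @ G) % 2

def correct(recv7):
    s = (H @ recv7) % 2
    if s.any():                                       # nonzero syndrome:
        pos = np.where((H.T == s).all(axis=1))[0][0]  # find matching column
        recv7 = recv7.copy()
        recv7[pos] ^= 1                               # flip the erroneous bit
    return recv7

cw = encode([1, 0, 1, 1])
rx = cw.copy(); rx[2] ^= 1        # inject a single bit error
assert (correct(rx) == cw).all()
```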
318

Speech encoding for low data rate transmission

Al-Doubooni, Maythem M. Z. January 1981 (has links)
This work is concerned with encoding shape descriptors for a succession of waveform segments to enable the transmission of speech signals at a low data rate. The segmentation depends on the identification of waveform features in speech signals, thereby producing an irregular data rate from the time encoding process. The shape descriptors are related to the real and complex zeros of a waveform through the theory of zero-based signal representation. A study of the factors governing the data rate, speech intelligibility and buffer delay was made for this coding process, based on waveform segmentation at zero-crossings. The redundancy in the average information conveyed by the zero-crossing data was investigated from conditional probability measurements, leading to the conclusion that a significant reduction in the data was available from coding procedures utilising the correlation in the data sequence. Signal pre-emphasis and dynamic range were found to control the segmentation rate, with the variations in segmentation rate during an utterance determining the buffer size and delay. The transmission rate and the system delay necessary for time encoding were strongly influenced by the distortion arising from buffer management in matching the variable information rate to a constant transmission rate. Reducing the transmission rate by approximately one third was observed to introduce data underflow distortion into approximately 5% of the speech at a 200 ms system delay setting. Finally, a performance assessment of the time encoding process was made, subjectively by a reduced form of the Diagnostic Rhyme Test (DRT) and objectively by comparison of spectral density plots. The results indicate a data rate less than that of delta modulation and a processing complexity less than that of vocoders.
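A minimal sketch of the segmentation step described above (a toy signal, not the thesis's data): splitting a waveform at its zero crossings yields the irregular, signal-dependent segment rate that the buffer management then has to smooth.

```python
import numpy as np

def zero_cross_segments(x):
    """Split a waveform at its zero crossings. Each segment can then be
    reduced to a few shape descriptors (e.g. duration and extremum)
    for low-rate time encoding."""
    signs = np.signbit(x)
    cuts = np.where(signs[1:] != signs[:-1])[0] + 1
    return np.split(x, cuts)

fs = 8000.0
t = np.arange(0, 0.02, 1 / fs)                       # 20 ms of a toy signal
x = np.sin(2*np.pi*300*t) + 0.3*np.sin(2*np.pi*900*t)
segs = zero_cross_segments(x)
print(len(segs), len(segs) / 0.02, "segments/s")     # irregular, signal-dependent
```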
319

Non-linear frequency compression, with particular reference to helium speech

Al-Sulaifanie, Bayez K. January 1984 (has links)
Helium speech denotes the speech produced by a deep-sea diver breathing a helium-oxygen mixture. Replacing the nitrogen in normal air with helium solves some of the physiological problems associated with diving under pressure, but it introduces severe distortion into the diver's speech, the principal distortion being a nonlinear frequency expansion of the formant frequencies. A real-time enhancement system has been constructed and partially tested. The design specification for this unscrambler has been generalised to enable the system to correct most helium speech distortions. The system operates in the frequency domain and is based on the wide-band analysis-synthesis technique. The system's algorithm for correcting the helium speech distortion is flexible and can easily be changed to suit different diving conditions. The possible use of the system to study helium speech characteristics has also been considered.
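A crude, hedged sketch of the frequency-domain correction idea (single frame, linear compression factor; the thesis addresses a nonlinear warp within a full analysis-synthesis framework, neither reproduced here):

```python
import numpy as np

def compress_spectrum(frame, alpha):
    """Single-frame frequency compression: output bin k takes its
    magnitude from input bin k/alpha (alpha < 1 squeezes the spectrum
    toward DC); the original phase is reused. A real unscrambler would
    apply a nonlinear warp and run frame-by-frame with overlap-add."""
    spec = np.fft.rfft(frame)
    bins = np.arange(len(spec))
    mag = np.interp(bins / alpha, bins, np.abs(spec), right=0.0)
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), len(frame))

frame = np.sin(2 * np.pi * 1800.0 * np.arange(512) / 8000.0)  # 1.8 kHz tone
out = compress_spectrum(frame, alpha=0.6)  # tone energy moves to ~1.08 kHz
```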
320

A functional multiprocessor system for real-time digital signal processing

Sulley, C. E. January 1985 (has links)
This thesis is concerned primarily with the architecture of digital signal computers. The work is supported by the design, development and application of a novel digital signal computer system, the MAC68. The MAC68 is a functional multiprocessor using two independent processors, one executing general-purpose tasks and the other executing sequences of arithmetic operations. The particular MAC68 design was arrived at after careful evaluation of existing digital signal computer architectures. MAC68 features are fully evaluated through its application to the sub-band coding of speech, and in particular through the development of a 16 kbit/s sub-band coder using six sub-bands. MAC68 performance was found to be comparable to that of current DSP micros for basic digital filter tasks, and superior for FFT tasks. The MAC68 architecture balances high-speed arithmetic and general-purpose capabilities, and is likely to have a greater range of application than general-purpose micros or DSP micros used alone. Suggestions are put forward for MAC68 enhancements utilising state-of-the-art hardware and software technologies. Because of the current widespread use of general-purpose micros, and because of the possible performance gains offered by the MAC68-type architecture, MAC68 architectural concepts should be of value in the design of future high-performance digital signal computer systems.
