61

Codec for multimedia services using wavelets and fractals.

Brijmohan, Yarish. January 2004 (has links)
Advances in telecommunications, computers and television have prompted the need to exchange video, image and audio files between people. Transmission of such files finds numerous multimedia applications, such as internet multimedia, video conferencing and videophones. However, the transmission and reception of these files are limited by the available bandwidth as well as the storage capacities of systems. Thus there is a need to develop compression systems so that the required multimedia applications can operate within these limited capacities. This dissertation presents two well-established coding approaches that are used in modern image and video compression systems: the wavelet and fractal methods. The wavelet-based coder, which adopts the transform coding paradigm, performs the discrete wavelet transform on an image before any compression algorithms are applied. The wavelet transform provides good energy compaction and decorrelating properties that make it well suited to compression. Fractal compression systems, on the other hand, differ from traditional transform coders. These algorithms are based on the theory of iterated function systems and take advantage of local self-similarities present in images. In this dissertation, we first review the theoretical foundations of both wavelet and fractal coders. Thereafter we evaluate different wavelet- and fractal-based compression algorithms, and assess the strengths and weaknesses in each case. Due to the shortcomings of fractal-based compression schemes, such as the tiling effect appearing in reconstructed images, a wavelet-based analysis of fractal image compression is presented. This is the link that produces fractal coding in the wavelet domain, and leads to a hybrid coding scheme called fractal-wavelet coding. We show that by using smooth wavelet bases in computing the wavelet transform, the tiling effect of fractal systems can be removed. The few wavelet-fractal coders that have been proposed in the literature are discussed, showing advantages over traditional fractal coders. This dissertation then presents a new low bit-rate video compression system that is based on fractal coding in the wavelet domain. This coder makes use of the advantages of both the wavelet and fractal coders discussed in the review. The self-similarity property of fractal coders exploits the high spatial and temporal correlation between video frames. Thus the fractal coding step gives an approximate representation of the coded frame, while the wavelet technique adds detail to the frame. In the proposed scheme, each frame is decomposed using the pyramidal multi-resolution wavelet transform. Thereafter a motion detection operation is used in which the subtrees are partitioned into motion and non-motion subtrees. The non-motion subtrees are coded by a simple binary decision, whereas the moving ones are coded using a combination of the wavelet SPIHT and fractal variable subtree size coding schemes. All intra-frame compression is performed using the SPIHT compression algorithm, and inter-frame compression uses the fractal-wavelet method described above. The proposed coder is then compared to current low bit-rate video coding standards such as the H.263+ and MPEG-4 coders through analysis and simulations. Results show that the proposed coder is competitive with the current standards, with a performance improvement shown on video sequences that do not possess large global motion.
Finally, a real-time implementation of the proposed algorithm is performed on a digital signal processor. This illustrates the suitability of the proposed coder for numerous multimedia applications. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2004.
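As an illustration of the motion-detection step described in this abstract, the sketch below classifies wavelet subtrees as motion or non-motion by comparing the detail coefficients of consecutive frames against a threshold. It is a minimal, assumed reading of the approach: a one-level Haar decomposition stands in for the pyramidal transform, and the block size and threshold (`block`, `thresh`) are hypothetical parameters, not values from the thesis.

```python
import numpy as np

def haar2d(frame):
    """One level of a 2-D Haar decomposition (frame dimensions assumed even).

    Returns the approximation subband and a tuple of three detail subbands.
    """
    a = frame[0::2, 0::2].astype(float)   # top-left pixel of each 2x2 block
    b = frame[0::2, 1::2].astype(float)   # top-right
    c = frame[1::2, 0::2].astype(float)   # bottom-left
    d = frame[1::2, 1::2].astype(float)   # bottom-right
    approx = (a + b + c + d) / 2.0
    details = ((a - b + c - d) / 2.0,
               (a + b - c - d) / 2.0,
               (a - b - c + d) / 2.0)
    return approx, details

def classify_subtrees(curr, prev, block=8, thresh=10.0):
    """Partition wavelet subtrees into motion / non-motion subtrees.

    A subtree rooted at a block of detail coefficients is flagged as motion
    (True) when the mean absolute difference between the current and previous
    frames' detail coefficients in that block exceeds `thresh`.
    `block` and `thresh` are illustrative parameters only.
    """
    _, det_c = haar2d(curr)
    _, det_p = haar2d(prev)
    diff = sum(np.abs(c - p) for c, p in zip(det_c, det_p)) / 3.0
    rows, cols = diff.shape[0] // block, diff.shape[1] // block
    flags = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            tile = diff[i * block:(i + 1) * block, j * block:(j + 1) * block]
            flags[i, j] = tile.mean() > thresh
    return flags
```

In the coder described above, a subtree flagged as non-motion would be signalled with a single binary decision, while a motion subtree would proceed to the combined SPIHT/fractal coding stage.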
62

Soft-decision decoding of Reed-Solomon codes for mobile messaging systems

Kosmach, James J. 12 1900 (has links)
No description available.
63

Non-binary compound codes based on single parity-check codes.

Ghayoor, Farzad. January 2013 (has links)
Shannon showed that codes with a random-like codeword weight distribution are capable of approaching the channel capacity. However, the random-like property can be achieved only with long codewords, and the decoding complexity of a random-like code increases exponentially with its length. Therefore, code designers combine shorter and simpler codes in a pseudorandom manner to form longer and more powerful codewords. In this research, a method for designing non-binary compound codes with moderate to high coding rate is proposed. Based on this method, non-binary single parity-check (SPC) codes are considered as component codes, and different iterative decoding algorithms for decoding the constructed compound codes are proposed. The soft-input soft-output component decoders employed in the iterative decoding algorithms are constructed from optimal and sub-optimal a posteriori probability (APP) decoders. However, for non-binary codes, implementing an optimal APP decoder requires a large amount of memory. In order to reduce the memory requirement of the APP decoding algorithm, the first part of this research presents a modified form of the APP decoding algorithm whose memory requirement is significantly less than that of the standard APP decoder, making it more practical for decoding non-binary block codes. The compound codes proposed in this research are constructed from combinations of non-binary SPC codes. Therefore, as part of this research, the construction and decoding of non-binary SPC codes defined over a finite ring of order q are presented; the concept of finite rings is more general and thus includes non-binary SPC codes defined over finite fields. Thereafter, based on the product of non-binary SPC codes, a class of non-binary compound codes is proposed that is efficient for controlling both random-error and burst-error patterns and can be used in applications where high coding rate schemes are required. Simulation results show that the performance of the proposed codes is good, and that the performance of the compound code improves over larger rings. The analytical performance bounds and the minimum distance properties of these product codes are also studied. / Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2013.
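To make the construction concrete, the sketch below encodes a small compound (product) code built from q-ary single parity-check component codes over the ring Z_q: each row and then each column of an information block receives one parity symbol so that its symbols sum to zero modulo q. This is a generic SPC product construction offered for illustration, not the specific combining scheme or decoder proposed in the thesis.

```python
import numpy as np

def spc_encode(message, q):
    """Append one parity symbol over the ring Z_q so the codeword sums to 0 mod q."""
    message = np.asarray(message) % q
    parity = (-int(message.sum())) % q
    return np.concatenate([message, [parity]])

def spc_product_encode(block, q):
    """Two-dimensional compound (product) code built from q-ary single
    parity-check component codes: every row and then every column of the
    information block receives one parity symbol.

    `block` is a k x k array of information symbols over Z_q; the result is
    a (k + 1) x (k + 1) codeword array.
    """
    block = np.asarray(block) % q
    rows = np.array([spc_encode(r, q) for r in block])       # row parities
    full = np.array([spc_encode(c, q) for c in rows.T]).T    # column parities
    return full

# Example over the ring Z_4: a 2x2 information block becomes a 3x3 codeword
# in which every row and every column sums to 0 modulo 4.
codeword = spc_product_encode([[1, 2], [3, 0]], q=4)
```

Iterative decoding of such a code, as studied in the thesis, would pass soft information between the row and column SPC component decoders; only the encoder structure is sketched here.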
64

cROVER: Context-augmented Speech Recognizer based on Multi-Decoders' Output

Abida, Mohamed Kacem 20 September 2011 (has links)
The growing need for designing and implementing reliable voice-based human-machine interfaces has inspired intensive research in the field of voice-enabled systems, and greater robustness and reliability are being sought for those systems. Speech recognition has become ubiquitous: automated call centers, smart phones, and dictation and transcription software are among the many systems currently being designed that involve speech recognition. The need for highly accurate and optimized recognizers has never been more crucial. The research community is actively developing powerful techniques to combine existing feature extraction methods for better and more reliable information capture from the analog signal, as well as enhancing the language and acoustic modeling procedures to better adapt to unseen or distorted speech signal patterns. Most researchers agree that one of the most promising approaches to reducing the Word Error Rate (WER) in large-vocabulary speech transcription is to combine two or more speech recognizers and then generate a new output, in the expectation that it provides a lower error rate. The research work proposed here aims at further enhancing the performance of the well-known Recognizer Output Voting Error Reduction (ROVER) combination technique. This is done through its integration with an error filtering approach. The proposed system is referred to as cROVER, for context-augmented ROVER. The principal idea is to flag erroneous words, following the combination of the word transition networks, through a scanning process at each slot of the resulting network. This step aims at eliminating some transcription errors and thus facilitating the voting process within ROVER. The error detection technique consists of spotting semantic outliers in a given decoder's transcription output. Because most error detection techniques suffer from a high false-positive rate, we propose to combine the error filtering techniques to compensate for the poor performance of each individual error classifier. Experimental results have shown that the proposed cROVER approach is able to reduce the relative WER by almost 10% through adequate combination of speech decoders. The approaches proposed here are generic enough to be used with any number of speech decoders and with any type of error filtering technique. A novel voting mechanism has also been proposed. The new confidence-based voting scheme was inspired by the cROVER approach; the main idea consists of using the confidence scores collected from the contextual analysis during the scoring of each word in the transition network. The new voting scheme outperformed ROVER's original voting by up to 16% in terms of relative WER reduction.
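The confidence-based voting idea can be sketched as follows: at each slot of the combined word transition network, a candidate word's score mixes how many decoders voted for it with the contextual confidence attached to it. The snippet below is a minimal, assumed illustration of such a scheme; the scoring formula, the `alpha` trade-off weight and the example confidences are illustrative, not the thesis's actual parameters.

```python
from collections import defaultdict

def confidence_weighted_vote(slot, alpha=0.7):
    """Pick a winning word at one slot of a word transition network.

    `slot` holds (word, confidence) pairs, one per decoder, where the
    confidence is assumed to come from a contextual/semantic analysis of
    that decoder's output.  A word's score mixes its vote frequency with
    its average confidence; `alpha` is an illustrative trade-off weight.
    """
    counts, conf = defaultdict(int), defaultdict(float)
    for word, c in slot:
        counts[word] += 1
        conf[word] += c
    n = len(slot)

    def score(word):
        return alpha * counts[word] / n + (1 - alpha) * conf[word] / counts[word]

    return max(counts, key=score)

# Three decoders disagree on one slot; the high-confidence minority word
# ("ship") outscores the low-confidence majority ("sheep").
winner = confidence_weighted_vote([("ship", 0.95), ("sheep", 0.2), ("sheep", 0.1)])
```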
65

Fixed-analysis adaptive-synthesis filter banks

Lettsome, Clyde Alphonso 07 April 2009 (has links)
Subband/wavelet analysis-synthesis filter banks are a major component of many compression algorithms. Such compression algorithms have been applied to images, voice, and video, and have achieved high performance. Typically, the configuration for such compression algorithms involves a bank of analysis filters whose coefficients have been designed in advance to enable high-quality reconstruction. The analysis system is then followed by subband quantization and decoding on the synthesis side. Decoding is performed using a corresponding set of synthesis filters and the subbands are merged together. For many years, there has been interest in improving the analysis-synthesis filters in order to achieve better coding quality. Adaptive filter banks have been explored by a number of authors, whereby the analysis and synthesis filter coefficients are changed dynamically in response to the input. A degree of performance improvement has been reported, but this approach requires that the analysis system dynamically maintain synchronization with the synthesis system in order to perform reconstruction. In this thesis, we explore a variant of the adaptive filter bank idea, which we refer to as fixed-analysis adaptive-synthesis filter banks. Unlike the adaptive filter banks proposed previously, there is no analysis-synthesis synchronization issue involved, which implies less coder complexity and more coder flexibility. Such an approach can be compatible with existing subband/wavelet encoders. The design methodology and a performance analysis are presented.
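A toy reading of the fixed-analysis adaptive-synthesis idea is sketched below: the analysis bank is a fixed two-channel Haar pair, while the synthesis filters are chosen by least squares to minimize the reconstruction error for a given signal. The Haar filters, tap count and least-squares criterion are assumptions made for illustration, not the design method developed in the thesis.

```python
import numpy as np

def analysis_bank(x):
    """Fixed two-channel Haar analysis: filter, then downsample by two."""
    h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # lowpass
    h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)  # highpass
    lo = np.convolve(x, h0)[1::2]
    hi = np.convolve(x, h1)[1::2]
    return lo, hi

def upsample(c, n):
    """Zero-insert a subband back to length n (samples land on even indices)."""
    u = np.zeros(n)
    u[::2] = c[: (n + 1) // 2]
    return u

def adapt_synthesis(x, lo, hi, taps=4):
    """Least-squares fit of synthesis filters g0, g1 (length `taps`) that
    best reconstruct x from the fixed-analysis subbands.  The columns of A
    are shifted copies of the zero-upsampled subbands, so A @ g is the
    output of a (circular) two-channel synthesis bank."""
    n = len(x)
    u0, u1 = upsample(lo, n), upsample(hi, n)
    cols = [np.roll(u, k) for u in (u0, u1) for k in range(taps)]
    A = np.stack(cols, axis=1)                      # n x (2 * taps)
    g, *_ = np.linalg.lstsq(A, x, rcond=None)
    return g[:taps], g[taps:], A @ g                # g0, g1, reconstruction

# Example: the adapted synthesis pair reconstructs this test signal from the
# fixed Haar analysis subbands (the Haar pair is already perfect-reconstruction,
# so the least-squares fit has an exact solution).
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
lo, hi = analysis_bank(x)
g0, g1, x_hat = adapt_synthesis(x, lo, hi)
```

In an actual coder the adaptation would act on quantized subbands, which is where the approach described in the abstract differs from simply re-deriving the fixed synthesis pair.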
66

Viterbi decoders for mobile and satellite communications

Abdul Shakoor, Abdul Rafeeq, January 1900 (has links)
Thesis (M. App. Sc.)--Carleton University, 2004. / Includes bibliographical references (p. 75-79). Also available in electronic format on the Internet.
67

Decoding algorithms for binary BCH and Reed-Solomon codes

Swaminathan, Jayashree. January 1995 (has links)
Thesis (M.S.)--Ohio University, August, 1995. / Title from PDF t.p.
68

Implementation of a forward error correction technique using convolutional encoding with Viterbi decoding

Rawat, Sachin. January 2004 (has links)
Thesis (M.S.)--Ohio University, March, 2004. / Includes bibliographical references (p. 90-91).
69

Iterative co-channel interference suppression

Gu, Chaowen, January 1900 (has links)
Thesis (M.App.Sc.) - Carleton University, 2007. / Includes bibliographical references (p. 109-110). Also available in electronic format on the Internet.
70

Implementation of a forward error correction technique using convolutional encoding with Viterbi decoding

Rawat, Sachin. January 2004 (has links)
Thesis (M.S.)--Ohio University, March, 2004. / Title from PDF t.p. Includes bibliographical references (p. 90-91)
