21

Multiple Description Lattice Vector Quantization

Huang, Xiang, 06 1900
This thesis studies multiple description vector quantization with lattice codebooks (MDLVQ). The design of the index assignment is crucial to the performance of MDLVQ; however, to the best of our knowledge, none of the previous index assignment algorithms for MDLVQ is optimal. In this thesis, we propose a simple linear-time index assignment algorithm for MDLVQ with any K ≥ 2 balanced descriptions. We prove, under the assumption of high resolution, that the algorithm is optimal for K = 2. The optimality holds for many commonly used good lattices of any dimension, over the entire range of achievable central distortions given the side entropy rate, and is in the sense of minimizing the expected distortion given the side description loss rate and the side entropy rate. We conjecture the algorithm to be optimal for K > 2 in general.

We also made progress in the analysis of MDLVQ performance: the first exact closed-form expression of the expected distortion was derived for K = 2, and for K > 2 we improved the existing asymptotic expression of the expected distortion.

Thesis / Master of Applied Science (MASc)
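The algorithms above operate on lattice codebooks, where the nearest code vector is found in closed form rather than by search. As background only (not the thesis's index assignment itself), here is a minimal Python sketch of nearest-point quantization for two standard lattices, Z^n and D_n, following the well-known rounding constructions; function names are illustrative.

```python
import numpy as np

def quantize_Zn(x):
    """Nearest point in the integer lattice Z^n: componentwise rounding."""
    return np.round(x)

def quantize_Dn(x):
    """Nearest point in D_n = {v in Z^n : sum(v) even}: round componentwise;
    if the coordinate sum is odd, re-round the component whose rounding
    error was largest in the other direction (Conway/Sloane construction)."""
    v = np.round(x)
    if int(np.sum(v)) % 2 != 0:
        i = np.argmax(np.abs(x - v))          # worst-rounded coordinate
        v[i] += 1.0 if x[i] > v[i] else -1.0  # round it the other way
    return v

x = np.array([0.6, -1.2, 0.4, 2.3])
print(quantize_Zn(x), quantize_Dn(x))
```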
22

Target tracking using residual vector quantization

Aslam, Salman Muhammad, 18 November 2011
In this work, our goal is to track visual targets using residual vector quantization (RVQ). We compare our results with principal components analysis (PCA) and tree-structured vector quantization (TSVQ) based tracking. This work is significant since PCA is commonly used in the pattern recognition, machine learning, and computer vision communities, while TSVQ is commonly used in the signal processing and data compression communities. RVQ with more than two stages has not received much attention due to the difficulty of producing stable designs. In this work, we bring these different approaches together into an integrated tracking framework and show that RVQ tracking performs best according to multiple criteria on publicly available datasets. Moreover, an advantage of our approach is a learning-based tracker that builds the target model while it tracks, thus avoiding the costly step of building target models prior to tracking.
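As a point of reference for how RVQ produces and decodes a multi-stage code, here is a minimal sketch of residual vector quantization with fixed per-stage codebooks. The codebooks and shapes are made up for illustration; the thesis's tracker learns its model online, which this sketch does not attempt.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Multi-stage RVQ: at each stage, quantize the residual left by the
    previous stages and subtract the chosen code vector."""
    residual, indices = x.astype(float), []
    for cb in codebooks:                       # cb: (K, d) array of code vectors
        i = int(np.argmin(np.sum((cb - residual) ** 2, axis=1)))
        indices.append(i)
        residual = residual - cb[i]
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruction is the sum of the selected code vectors, one per stage."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))

rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((8, 4)) * s for s in (1.0, 0.5, 0.25)]
x = rng.standard_normal(4)
idx = rvq_encode(x, codebooks)
print(idx, np.linalg.norm(x - rvq_decode(idx, codebooks)))
```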
23

On error-robust source coding with image coding applications

Andersson, Tomas, January 2006
This thesis treats the problem of source coding in situations where the encoded data is subject to errors. The typical scenario is a communication system, where source data such as speech or images should be transmitted from one point to another. A problem is that most communication systems introduce some sort of error in the transmission. A wireless communication link is prone to introduce individual bit errors, while in a packet based network, such as the Internet, packet losses are the main source of error.

The traditional approach to this problem is to add error correcting codes on top of the encoded source data, or to employ some scheme for retransmission of lost or corrupted data. The source coding problem is then treated under the assumption that all data that is transmitted from the source encoder reaches the source decoder on the receiving end without any errors. This thesis takes another approach to the problem and treats source and channel coding jointly under the assumption that there is some knowledge about the channel that will be used for transmission. Such joint source-channel coding schemes have potential benefits over the traditional separated approach. More specifically, joint source-channel coding can typically achieve better performance using shorter codes than the separated approach. This is useful in scenarios with constraints on the delay of the system.

Two different flavors of joint source-channel coding are treated in this thesis: multiple description coding and channel optimized vector quantization. Channel optimized vector quantization is a technique to directly incorporate knowledge about the channel into the source coder. This thesis contributes to the field by using channel optimized vector quantization in a couple of new scenarios. Multiple description coding is the concept of encoding a source using several different descriptions in order to provide robustness in systems with losses in the transmission. One contribution of this thesis is an improvement to an existing multiple description coding scheme and another contribution is to put multiple description coding in the context of channel optimized vector quantization. The thesis also presents a simple image coder which is used to evaluate some of the results on channel optimized vector quantization.
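For concreteness, the channel-optimized VQ encoding rule can be sketched as follows: instead of the nearest code vector, the encoder picks the index whose expected end-to-end distortion, averaged over the channel's index transition probabilities, is smallest. This is a generic illustration under an assumed transition matrix, not the thesis's specific design.

```python
import numpy as np

def covq_encode(x, codebook, P):
    """Channel-optimized VQ encoder.
    codebook: (K, d) reproduction vectors; P[i, j] = Pr(decoder sees j | i sent).
    Chooses i minimizing sum_j P[i, j] * ||x - c_j||^2."""
    d2 = np.sum((codebook - x) ** 2, axis=1)   # squared distance to every c_j
    expected = P @ d2                          # expected distortion per sent index
    return int(np.argmin(expected))

# Toy example: 4 code vectors, binary symmetric channel on 2-bit indices.
K, eps = 4, 0.05
bits = lambda i: np.array([(i >> 1) & 1, i & 1])
P = np.array([[np.prod(np.where(bits(i) == bits(j), 1 - eps, eps))
               for j in range(K)] for i in range(K)])
codebook = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(covq_encode(np.array([0.9, 0.1]), codebook, P))
```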
25

Song following using automatic speech recognition and temporal alignment

Beaudette, David, January 2010
Score following is defined as the on-computer synchronization between a known musical score and the audio signal of a performer playing that score. In the particular case of the singing voice, there is still room for improvement in existing algorithms, especially for real-time score following. The objective of this project is therefore to implement a robust, real-time score-following program using the digitized singing-voice signal and the song lyrics. The proposed software uses several features of the singing voice (energy, correspondence with vowels, and the signal's zero-crossing count) and matches them against the musical score in MusicXML format. These features, extracted for each frame, are aligned to the phonetic units of the score. In parallel with this short-term alignment, the system adds a second, more reliable level of position estimation by associating a segmentation of the signal into singing blocks with continuously sung sections of the score. The system's performance is evaluated by presenting offline alignments obtained on 3 song excerpts performed by 2 different singers, a man and a woman, in English and in French.
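The abstract does not name the alignment algorithm, but dynamic time warping is the standard tool for monotone alignment of audio frames to score units, so a generic sketch may help fix ideas; it is an assumption, not the thesis's exact method.

```python
import numpy as np

def dtw(cost):
    """Dynamic time warping over a precomputed (frames x score units) local
    cost matrix; returns the accumulated cost of the best monotone alignment."""
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j],      # stay on unit
                                               D[i, j - 1],      # skip ahead
                                               D[i - 1, j - 1])  # advance both
    return D[n, m]

# Toy: 5 audio frames against 3 phonetic units.
rng = np.random.default_rng(1)
local_cost = rng.random((5, 3))
print(dtw(local_cost))
```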
26

The Realization Analysis of SAR Raw Data With Block Adaptive Vector Quantization Algorithm

Yang, Yun-zhi, Huang, Shun-ji, Wang, Jian-guo, 10 1900
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / In this paper, we discuss a block adaptive vector quantization (BAVQ) algorithm for synthetic aperture radar (SAR) raw data, and a method for realizing the BAVQ compression algorithm on a digital signal processor. Using the algorithm and the digital signal processor, we have compressed SIR-C/X-SAR data.
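The paper's exact BAVQ design is not reproduced in this abstract; the following is a generic sketch of the block-adaptive idea, where each block of raw samples is normalized by an estimated gain before quantization with a fixed unit-variance codebook. Block size, codebook, and names are assumptions.

```python
import numpy as np

def bavq_encode(raw, block, codebook):
    """Per block: estimate a gain (std), normalize, then vector-quantize the
    normalized samples with a fixed codebook. Returns (gains, indices)."""
    gains, indices = [], []
    for start in range(0, len(raw) - block + 1, block):
        blk = raw[start:start + block]
        g = blk.std() + 1e-12                  # block gain, guarded against 0
        v = blk / g
        i = int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))
        gains.append(g)
        indices.append(i)
    return np.array(gains), np.array(indices)

rng = np.random.default_rng(2)
codebook = rng.standard_normal((16, 8))        # 16 unit-variance code vectors
raw = rng.standard_normal(64) * 37.0           # raw samples with large dynamic range
print(bavq_encode(raw, 8, codebook)[1])
```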
27

Exploiting spatial and temporal redundancies for vector quantization of speech and images

Meh Chu, Chu, 07 January 2016
The objective of the proposed research is to compress data such as speech, audio, and images using a new re-ordering vector quantization approach that exploits the transition probabilities between consecutive code vectors in a signal. Vector quantization is the process of encoding blocks of samples from a data sequence by replacing every input vector with a reproduction vector from a dictionary. Shannon's rate-distortion theory states that signals encoded as blocks of samples achieve better rate-distortion performance than signals encoded on a sample-by-sample basis. As such, for any given signal, vector quantization achieves a lower coding rate for a given distortion than scalar quantization.

Vector quantization does not take advantage of the inter-vector correlation between successive input vectors in data sequences, yet real signals have been shown to have significant inter-vector correlation. This correlation has led to vector quantization approaches that encode input vectors based on previously encoded vectors, and several methods have been proposed in the literature to exploit the dependence between successive code vectors. Predictive vector quantization, dynamic codebook re-ordering, and finite-state vector quantization are examples of schemes that use inter-vector correlation. Predictive vector quantization and finite-state vector quantization predict the reproduction vector for a given input vector by using past input vectors. Dynamic codebook re-ordering vector quantization has the same reproduction vectors as standard vector quantization; its algorithm re-orders indices, assigning existing reproduction vectors new channel indices according to a structure that orders them by increasing dissimilarity. Hence, an input vector encoded in the standard vector quantization manner is transmitted with new indices such that index 0 is assigned to the reproduction vector closest to the previous reproduction vector, and larger index values are assigned to reproduction vectors farther from it. Dynamic codebook re-ordering assumes that the reproduction vectors of two successive vectors of real signals are typically close to each other according to a distance metric; sometimes, however, two successively encoded vectors may be relatively far apart.

Our likelihood codebook re-ordering vector quantization algorithm exploits the structure within a signal through the non-uniformity of the reproduction vector transition probabilities in a data sequence. Input vectors that have a higher probability of transition from the prior reproduction vector are assigned indices of smaller value: the code vectors most likely to follow a given vector receive indices close to 0, while less likely ones receive indices of higher value. This re-ordering gives the reproduction dictionary a structure suitable for entropy coding such as Huffman and arithmetic coding. Since such transitions are common in real signals, our proposed algorithm, when combined with entropy coding algorithms such as binary arithmetic and Huffman coding, is expected to yield lower bit rates for the same distortion than a standard vector quantization algorithm. The re-ordering approach on quantized indices can be useful in speech, image, and audio transmission.

By applying our re-ordering approach to these data types, we expect to achieve lower coding rates for a given distortion or perceptual quality. This reduced coding rate makes our proposed algorithm useful for the transmission and storage of larger image and speech streams over their respective communication channels. Applying truncation to the likelihood codebook re-ordering scheme yields much lower compression rates without significantly distorting the perceptual quality of the signals. Today, text and other multimedia signals may benefit from this additional layer of likelihood re-ordering compression.
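The re-ordering step itself is compact enough to sketch. The following minimal example, under assumed transition counts, ranks the successors of each code vector by decreasing transition probability and transmits those ranks, so that likely transitions map to small symbols that entropy coders compress well. Function names are illustrative, not from the thesis.

```python
import numpy as np

def transition_ranks(counts):
    """counts[i, j] = how often code vector j follows code vector i.
    ranks[i, j] = position of j when row i is sorted by decreasing count,
    so the most likely successor of i gets symbol 0."""
    order = np.argsort(-counts, axis=1, kind="stable")   # likely successors first
    ranks = np.empty_like(order)
    rows = np.arange(counts.shape[0])[:, None]
    ranks[rows, order] = np.arange(counts.shape[1])
    return ranks

def reorder_indices(indices, ranks):
    """Map a sequence of VQ indices to rank symbols (first index sent raw)."""
    out = [indices[0]]
    for prev, cur in zip(indices, indices[1:]):
        out.append(int(ranks[prev, cur]))
    return out

counts = np.array([[9, 3, 1], [2, 8, 2], [1, 1, 10]])
print(reorder_indices([0, 0, 1, 2, 2], transition_ranks(counts)))
# -> [0, 0, 1, 2, 0]: frequent transitions become small, compressible symbols
```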
28

The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction

Clayton, Arnshea, 09 June 2006
In this thesis the relative importance of input encoding and learning algorithm on protein secondary structure prediction is explored. A novel input encoding, based on multidimensional scaling applied to a recently published amino acid substitution matrix, is developed and shown to be superior to an arbitrary input encoding. Both decimal valued and binary input encodings are compared. Two neural network learning algorithms, Resilient Propagation and Learning Vector Quantization, which have not previously been applied to the problem of protein secondary structure prediction, are examined. Input encoding is shown to have a greater impact on prediction accuracy than learning methodology with a binary input encoding providing the highest training and test set prediction accuracy.
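For readers unfamiliar with Learning Vector Quantization, the LVQ1 update rule at the heart of the method is easy to state: the nearest prototype is pulled toward a training sample when their class labels agree and pushed away otherwise. A minimal sketch with made-up data follows; dimensions and rates are assumptions, not the thesis's settings.

```python
import numpy as np

def lvq1_step(protos, proto_labels, x, y, lr=0.05):
    """One LVQ1 update: move the winning prototype toward x if its class
    matches y, away from x otherwise. Returns the winner's index."""
    w = int(np.argmin(np.sum((protos - x) ** 2, axis=1)))  # nearest prototype
    sign = 1.0 if proto_labels[w] == y else -1.0
    protos[w] += sign * lr * (x - protos[w])
    return w

rng = np.random.default_rng(3)
protos = rng.standard_normal((4, 20))   # e.g. a 20-dim encoded residue window
labels = np.array([0, 1, 2, 1])         # helix / sheet / coil style classes
x, y = rng.standard_normal(20), 2
print(lvq1_step(protos, labels, x, y))
```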
29

Transmission of vector quantization over a frequency-selective Rayleigh fading CDMA channel

Nguyen, Son Xuan 19 December 2005
Recently, the transmission of vector quantization (VQ) over a code-division multiple access (CDMA) channel has received considerable attention in the research community. The complexity of optimal decoding for VQ in CDMA communications is prohibitive for implementation, especially for systems with a medium or large number of users. A suboptimal approach to VQ decoding over a CDMA channel disturbed by additive white Gaussian noise (AWGN) was recently developed. Such a suboptimal decoder is built from a soft-output multiuser detector (MUD), a soft bit estimator, and the optimal soft VQ decoders of the individual users.

Due to its lower complexity and good performance, such a decoding scheme is an attractive alternative to the complicated optimal decoder. It is necessary to extend this decoding scheme to a frequency-selective Rayleigh fading CDMA channel, a channel model typically seen in mobile wireless communications. This is precisely the objective of this thesis.

Furthermore, the suboptimal decoders are obtained not only for binary phase shift keying (BPSK) but also for M-ary pulse amplitude modulation (M-PAM). This extension offers a flexible trade-off between the spectrum efficiency and performance of the system. In addition, two algorithms based on distance measures and reliability processing are introduced as further alternatives to the suboptimal decoder.

Simulation results indicate that the suboptimal decoders studied in this thesis also perform very well over a frequency-selective Rayleigh fading CDMA channel.
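One building block mentioned above, the optimal soft VQ decoder, admits a compact sketch: given per-bit posteriors from the multiuser detector and soft bit estimator, the minimum mean-square error reconstruction is the posterior-weighted average of all code vectors rather than a hard-decoded one. The bit-independence assumption and all names below are illustrative.

```python
import numpy as np

def soft_vq_decode(codebook, bit_post):
    """MMSE soft VQ decoder.
    codebook: (2**B, d); index i is sent as B bits.
    bit_post[b] = Pr(bit b == 1 | channel output), assumed independent.
    Returns sum_i Pr(i | channel output) * c_i."""
    B = len(bit_post)
    probs = np.ones(2 ** B)
    for i in range(2 ** B):
        for b in range(B):
            bit = (i >> b) & 1
            probs[i] *= bit_post[b] if bit else 1.0 - bit_post[b]
    return probs @ codebook                    # posterior-weighted centroid

codebook = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(soft_vq_decode(codebook, np.array([0.9, 0.2])))  # fairly sure bit0=1, bit1=0
```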
30

Implementation and Evaluation of Image Retrieval Method Utilizing Geographic Location Metadata

Lundstedt, Magnus, January 2009
Multimedia retrieval systems are very important today, with millions of content creators all over the world generating huge multimedia archives. Recent developments allow for content-based image and video retrieval, but these methods are often quite slow, especially if applied to a library of millions of media items. In this research a novel image retrieval method is proposed which utilizes spatial metadata on images. By finding clusters of images based on their geographic location (the spatial metadata) and combining this information with existing content-based image retrieval algorithms, the proposed method enables efficient presentation of high-quality image retrieval results to system users. Clustering methods considered include Vector Quantization, Vector Quantization LBG, and DBSCAN. Clustering was performed on three different similarity measures: spatial metadata, histogram similarity, and texture similarity. For histogram similarity there are many different distance metrics to use when comparing histograms; Euclidean, Quadratic Form, and Earth Mover's Distance were studied, as well as three different color spaces: RGB, HSV, and CIE Lab.
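The spatial-metadata clustering step can be illustrated with scikit-learn's DBSCAN under the haversine metric, with coordinates in radians and eps expressed in earth-radius units. This is a plausible configuration, not necessarily the one used in the thesis.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy lat/lon (degrees): two tight groups of photos plus one stray.
coords_deg = np.array([[59.33, 18.06], [59.34, 18.07], [59.33, 18.05],
                       [48.86, 2.35], [48.85, 2.34],
                       [35.68, 139.69]])
coords = np.radians(coords_deg)                 # haversine expects radians
eps_km = 5.0
db = DBSCAN(eps=eps_km / 6371.0,                # earth radius ~6371 km
            min_samples=2, metric="haversine",
            algorithm="ball_tree").fit(coords)
print(db.labels_)                               # -1 marks the unclustered stray
```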
