Quantization of multimodal vector data in Realtime Interactive Communication Networks (RICNs), which arise in application areas such as speech, video, audio, and haptic signals, introduces a set of unique challenges. In particular, achieving the required distortion performance at minimum rate, while maintaining low end-to-end delay and handling packet losses, is of paramount importance. This dissertation presents vector quantization schemes that aim to satisfy these requirements based on two source coding paradigms: 1) predictive coding and 2) distributed source coding (DSC).

Gaussian Mixture Models (GMMs) can model any probability density function (pdf) with arbitrarily small error given a sufficient number of mixture components. Hence, GMMs can be effectively used to model the underlying pdfs of a wide variety of data in RICN applications. We first present GMM Kalman predictive coding, which combines transform-domain predictive GMM quantization techniques with Kalman filtering principles. In particular, we show how suitable modeling of the quantization noise leads to a signal-adaptive GMM Kalman predictive coder with improved coding performance. Moreover, we demonstrate how running a GMM Kalman predictive coder to convergence can be used to design a stationary GMM Kalman predictive coding system that provides improved coding of GMM vector data with only a modest increase in run-time complexity over the baseline.

Next, we address packet loss in the network using GMM Kalman predictive coding principles. In particular, we show how an initial GMM Kalman predictive coder can be used to obtain a robust GMM predictive coder specifically designed to operate under packet loss. We demonstrate how sets of encoding and decoding modes can be defined, with special Kalman encoding and decoding gains designed for each mode. Within this framework, GMM predictive coder design can be viewed as determining the special Kalman gains that minimize the expected mean squared error at the decoder under packet loss conditions.

Finally, we present analytical techniques for modeling, analyzing, and designing Wyner-Ziv (WZ) quantizers for distributed source coding of jointly Gaussian vector data with imperfect side information. In most DSC implementations, the side information is not explicitly available at the decoder; almost all practical implementations therefore obtain it from previously decoded frames. Due to model imperfections, packet losses, previous decoding errors, and quantization noise, the available side information is usually noisy. However, the design of WZ quantizers for imperfect side information has not been widely addressed in the DSC literature. The analytical techniques presented in this dissertation explicitly assume imperfect side information at the decoder. Furthermore, we demonstrate how the design problem for vector data can be decomposed into independent scalar design subproblems, and we present analytical techniques to compute the optimum step size and bit allocation for each scalar quantizer such that the decoder's expected vector mean squared error (MSE) is minimized. Simulation results verify that the MSE predicted by these analytical techniques closely matches the MSE observed in simulation.
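As an illustrative aside, the sketch below shows the kind of GMM density modeling the dissertation builds on, fitting a mixture to vector data and evaluating the modeled log-density. It uses scikit-learn's GaussianMixture rather than the dissertation's own code, and the data, dimension, and component count are placeholder assumptions.

```python
# Minimal sketch: modeling the pdf of vector data with a Gaussian mixture.
# Assumes scikit-learn and NumPy; the data, dimension, and component count
# are illustrative placeholders, not values from the dissertation.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder "training" vectors standing in for speech/audio/haptic feature vectors.
train = np.vstack([
    rng.normal(loc=-2.0, scale=0.5, size=(500, 4)),
    rng.normal(loc=1.5, scale=1.0, size=(500, 4)),
])

# Fit a GMM; adding components drives the pdf modeling error arbitrarily small.
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(train)

# Evaluate the modeled log-density of new vectors.
test = rng.normal(size=(5, 4))
print(gmm.score_samples(test))
```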
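The following scalar-only sketch conveys the core idea of treating quantization noise as Kalman measurement noise, so the reconstruction gain adapts to the quantizer. The AR(1) source model, step size, and variances are illustrative assumptions; this is a simplified stand-in, not the dissertation's transform-domain GMM Kalman coder.

```python
# Scalar sketch: a DPCM-style predictive coder whose reconstruction gain comes
# from a Kalman filter that models quantization noise as measurement noise.
# The AR(1) parameters, step size, and run length are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
a, q = 0.95, 0.1           # AR(1) source: x_t = a*x_{t-1} + w_t, Var(w_t) = q
step = 0.25                # uniform quantizer step size
r = step**2 / 12.0         # quantization noise variance used as "measurement" noise

def quantize(e, step):
    """Uniform quantizer applied to the prediction residual."""
    return step * np.round(e / step)

x = 0.0                    # source sample
x_hat, P = 0.0, 1.0        # shared encoder/decoder reconstruction and error variance
sq_err = []
for _ in range(10_000):
    x = a * x + rng.normal(scale=np.sqrt(q))   # next source sample
    x_pred = a * x_hat                         # closed-loop prediction
    P_pred = a * a * P + q
    e_q = quantize(x - x_pred, step)           # quantized residual sent to the decoder
    K = P_pred / (P_pred + r)                  # Kalman gain accounting for quantization noise
    x_hat = x_pred + K * e_q                   # signal-adaptive reconstruction
    P = (1.0 - K) * P_pred
    sq_err.append((x - x_hat) ** 2)

print("empirical reconstruction MSE:", np.mean(sq_err))
```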
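Likewise, the sketch below illustrates a generic coset-based Wyner-Ziv style scalar quantizer with imperfect side information: the encoder transmits only a coset index, and the decoder resolves the coset ambiguity using the noisy side information. The rate, step size, and noise levels are illustrative assumptions, and the scheme is a textbook-style stand-in rather than the quantizer design derived in the dissertation.

```python
# Scalar sketch: a coset-based Wyner-Ziv style quantizer with imperfect side
# information Y = X + N at the decoder. The rate, step size, and noise levels
# are illustrative assumptions, not the design developed in the dissertation.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
sigma_x, sigma_n = 1.0, 0.3        # source std dev and side-information noise std dev
step, R = 0.5, 2                   # quantizer step size and transmitted rate in bits
M = 2 ** R                         # number of cosets

x = rng.normal(scale=sigma_x, size=n)
y = x + rng.normal(scale=sigma_n, size=n)   # imperfect side information at the decoder

idx = np.round(x / step).astype(int)        # fine quantization index (encoder)
coset = np.mod(idx, M)                      # only the coset index is transmitted

# Decoder: among all indices congruent to the received coset, pick the one whose
# reconstruction level lies closest to the noisy side information.
k = coset + M * np.round((y / step - coset) / M)
x_hat = k * step

print("decoder MSE:", np.mean((x - x_hat) ** 2))
print("ideal quantizer-only MSE (step^2/12):", step ** 2 / 12)
```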
Identifier | oai:union.ndltd.org:UMIAMI/oai:scholarlyrepository.miami.edu:oa_dissertations-1373
Date | 22 April 2010
Creators | Subasingha, Subasingha Shaminda |
Publisher | Scholarly Repository |
Source Sets | University of Miami |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Open Access Dissertations |