11

Lattice-based Robust Distributed Coding Scheme for Correlated Sources

Elzouki, Dania January 2018 (has links)
In this thesis we propose two lattice-based robust distributed source coding systems, one for two correlated sources and the other for three correlated sources. We provide a detailed performance analysis under the high-resolution assumption. It is shown that, in a certain asymptotic regime, our scheme for two correlated sources achieves the information-theoretic limit of quadratic multiple description coding (MDC) as the lattice dimension goes to infinity, whereas a variant of the random coding scheme by Chen and Berger with Gaussian codes is 0.5 bits away from this limit. Our analysis also shows that, in the same asymptotic regime, as the lattice dimension goes to infinity, the proposed scheme for three correlated sources comes very close to the theoretical bound for the symmetric quadratic Gaussian MDC problem with single-description and all-three-descriptions decoders. / Thesis / Doctor of Philosophy (PhD)
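To make the mechanism concrete, here is a minimal sketch (not from the thesis) using a one-dimensional stand-in for the high-dimensional lattices analysed above: two descriptions are produced by quantizing onto two mutually shifted copies of a coarse lattice, so either description alone yields a coarse reconstruction, while the central decoder averages them onto a finer effective lattice. The step size and shift are illustrative assumptions.

```python
import numpy as np

def lattice_quantize(x, step, offset=0.0):
    """Quantize x onto the shifted, scaled integer lattice step*Z + offset."""
    return step * np.round((x - offset) / step) + offset

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                # unit-variance Gaussian source

step = 0.5                                 # coarse cell size (assumed value)
d1 = lattice_quantize(x, step)             # description 1: lattice step*Z
d2 = lattice_quantize(x, step, step / 2)   # description 2: shifted copy

# Each side decoder sees one coarse description; the central decoder
# averages both, which lands on a finer effective lattice.
print(f"side MSE:    {np.mean((x - d1) ** 2):.4f}")
print(f"central MSE: {np.mean((x - (d1 + d2) / 2) ** 2):.4f}")
```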
12

Determining the Distributed Karhunen-Loève Transform via Convex Semidefinite Relaxation

Zhao, Xiaoyu January 2018 (has links)
The Karhunen–Loève Transform (KLT) is prevalent nowadays in communication and signal processing. This thesis aims at attaining the KLT in the encoders and achieving the minimum sum rate in the case of Gaussian multiterminal source coding. In the general multiterminal source coding case, the data collected at the terminals are compressed in a distributed manner and then communicated to the fusion center for reconstruction. The data source is assumed to be a Gaussian random vector in this thesis. We introduce the rate-distortion function to formulate the optimization problem; it characterizes the minimum encoding sum rate subject to a given distortion. The main purpose of the thesis is to propose a distributed KLT for the encoders to process the sampled data and achieve the minimum sum rate. To determine the distributed Karhunen–Loève transform, we propose three kinds of algorithms. The first, iterative algorithm is derived directly from the saddle-point analysis of the optimization problem. We then develop another algorithm by combining the original rate-distortion function with Wyner's common information; this algorithm also has to be solved iteratively. Moreover, we propose algorithms without iterations, which generate the unknown variables from the existing variables and compute the result directly. All these algorithms make the lower and upper bounds of the minimum sum rate converge, as the gap can be reduced to a range that is small relative to the values of the bounds. / Thesis / Master of Applied Science (MASc)
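For reference, a brief sketch of the centralized transform that the thesis distributes: the KLT is the eigenbasis of the source covariance matrix, and applying it decorrelates the coefficients so that rate can be allocated per coefficient (e.g., by reverse water-filling on the eigenvalues). The covariance model below is an illustrative assumption, not one from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Correlated Gaussian source with geometrically decaying correlation
# (an illustrative covariance model).
C = np.fromfunction(lambda i, j: 0.9 ** np.abs(i - j), (n, n))
X = rng.multivariate_normal(np.zeros(n), C, size=50_000)

# The KLT is the eigenbasis of the source covariance: transforming by it
# decorrelates the coefficients, whose variances are the eigenvalues.
eigvals, U = np.linalg.eigh(np.cov(X, rowvar=False))
Y = X @ U

print("coefficient variances:", np.var(Y, axis=0).round(3))
print("eigenvalues:          ", eigvals.round(3))
# Rate can now be allocated per coefficient, e.g. by reverse water-filling.
```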
13

Source-Channel Coding in Networks

Wernersson, Niklas January 2008 (has links)
The aim of source coding is to represent information as accurately as possible using as few bits as possible, and in order to do so redundancy from the source needs to be removed. The aim of channel coding is in some sense the contrary, namely to introduce redundancy that can be exploited to protect the information when it is transmitted over a nonideal channel. Combining these two techniques leads to the area of joint source–channel coding, which in general makes it possible to achieve better performance when designing a communication system than when source and channel codes are designed separately. In this thesis four particular areas in joint source–channel coding are studied: analog (i.e. continuous) bandwidth expansion, distributed source coding over noisy channels, multiple description coding (MDC), and soft decoding. A general analog bandwidth expansion code based on orthogonal polynomials is proposed and analyzed. The code has performance comparable with other existing schemes but is more general in the sense that it is implementable for a larger number of source distributions. The problem of distributed source coding over noisy channels is studied, and two schemes, both operating on a sample-by-sample basis, are proposed and analyzed. The first code is based on scalar quantization optimized for certain channel characteristics; the second code is nonlinear and analog. Two new MDC schemes are proposed and investigated. The first is based on sorting a frame of samples and transmitting, as side information/redundancy, an index that describes the resulting permutation. In case some of the transmitted descriptors are lost during transmission, this side information (if received) can be used to estimate the lost descriptors from the received ones. The second scheme uses permutation codes to produce different descriptions of a block of source data; these descriptions can be used jointly to estimate the original source data. Finally, the MDC method of multiple description coding using pairwise correlating transforms, introduced by Wang et al., is also studied, and a modification of the quantization in this method is proposed which yields a performance gain. A well-known result in joint source–channel coding is that the performance of a communication system can be improved by soft decoding of the channel output at the cost of higher decoding complexity. An alternative is to quantize the soft information and store the pre-calculated soft decision values in a lookup table. In this thesis we propose new methods for quantizing soft channel information, to be used in conjunction with soft-decision source decoding. The question of how best to construct finite-bandwidth representations of soft information is also studied. / QC 20100920
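A toy sketch of the first MDC scheme's mechanism, with a deliberately simple neighbour-averaging estimator standing in for the estimators analysed in the thesis: the permutation that sorts a frame is transmitted as redundancy, and a lost sample is estimated from its rank among the received ones.

```python
import numpy as np

rng = np.random.default_rng(2)
frame = rng.normal(size=8)                  # one frame of source samples

perm = np.argsort(frame)                    # side info: sorting permutation
rank = np.empty(len(frame), dtype=int)
rank[perm] = np.arange(len(frame))          # rank[i] = position of sample i

lost = 3                                    # suppose descriptor 3 is lost
received = np.delete(np.arange(len(frame)), lost)

# The rank of the lost sample locates it among the received samples;
# average its sorted neighbours as a simple estimate.
order = received[np.argsort(rank[received])]     # received, sorted by rank
pos = np.searchsorted(rank[order], rank[lost])
neighbours = [frame[order[p]] for p in (pos - 1, pos) if 0 <= p < len(order)]
estimate = float(np.mean(neighbours))
print(f"true {frame[lost]:+.3f}  estimate {estimate:+.3f}")
```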
14

Network compression via network memory: fundamental performance limits

Beirami, Ahmad 08 June 2015 (has links)
The amount of information churned out daily around the world is staggering, and hence future technological advancements are contingent upon the development of scalable acquisition, inference, and communication mechanisms for this massive data. This Ph.D. dissertation draws upon mathematical tools from information theory and statistics to understand the fundamental performance limits of universal compression of this massive data at the packet level, applied just above layer 3 of the network, when the intermediate network nodes are enabled with the capability of memorizing previous traffic. Universality of compression imposes an inevitable redundancy (overhead) on the compression performance of universal codes, due to the learning of the unknown source statistics. In this work, previous asymptotic results on the redundancy of universal compression are generalized to characterize the performance of universal compression in the finite-length regime (applicable to small network packets). Further, network compression via memory is proposed as a solution for the compression of relatively small network packets whenever the network nodes (i.e., the encoder and the decoder) are equipped with memory and have access to massive amounts of previous communication. In a nutshell, network compression via memory learns the patterns and statistics of the packet payloads and uses them for compression and reduction of the traffic. At the cost of increased computational overhead in the network nodes, it significantly reduces the transmission cost in the network. This leads to a huge performance improvement, as the cost of transmitting one bit is by far greater than the cost of processing it.
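The memory idea can be illustrated with off-the-shelf tools (this uses DEFLATE's preset-dictionary facility, not the dissertation's codes): when encoder and decoder share previously seen traffic, installing it as a dictionary amortizes the learning overhead that dominates for short packets. The mock traffic below is an assumption for demonstration only.

```python
import zlib

# Shared "memory": payloads of previously seen packets (mocked here by
# recurring header-like text; real traffic statistics would differ).
memory = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n" * 40

packet = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: text/css\r\n"

# Memoryless compression of a short packet pays the full learning overhead.
plain = zlib.compress(packet, 9)

# With the shared memory installed as a preset dictionary, the overhead
# is amortized and the short packet compresses far better.
enc = zlib.compressobj(level=9, zdict=memory)
with_memory = enc.compress(packet) + enc.flush()
dec = zlib.decompressobj(zdict=memory)
assert dec.decompress(with_memory) == packet

print(len(packet), len(plain), len(with_memory))
```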
15

Exploiting spatial and temporal redundancies for vector quantization of speech and images

Meh Chu, Chu 07 January 2016 (has links)
The objective of the proposed research is to compress data such as speech, audio, and images using a new re-ordering vector quantization approach that exploits the transition probabilities between consecutive code vectors in a signal. Vector quantization is the process of encoding blocks of samples from a data sequence by replacing every input vector with a reproduction vector from a dictionary. Shannon's rate-distortion theory states that signals encoded as blocks of samples have better rate-distortion performance than when encoded on a sample-by-sample basis; as such, vector quantization achieves a lower coding rate for a given distortion than scalar quantization for any given signal. Standard vector quantization, however, does not take advantage of the inter-vector correlation between successive input vectors, and it has been demonstrated that real signals have significant inter-vector correlation. This correlation has led to vector quantization approaches that encode input vectors based on previously encoded vectors. Several methods have been proposed in the literature to exploit the dependence between successive code vectors: predictive vector quantization, dynamic codebook re-ordering, and finite-state vector quantization are examples of schemes that use inter-vector correlation. Predictive vector quantization and finite-state vector quantization predict the reproduction vector for a given input vector by using past input vectors. Dynamic codebook re-ordering vector quantization has the same reproduction vectors as standard vector quantization; its algorithm is based on re-ordering indices, whereby existing reproduction vectors are assigned new channel indices according to a structure that orders them by increasing dissimilarity. Hence, an input vector encoded by the standard vector quantization method is transmitted with new indices such that 0 is assigned to the reproduction vector closest to the past reproduction vector, and larger index values are assigned to reproduction vectors at larger distances from it. Dynamic codebook re-ordering assumes that the reproduction vectors of two successive vectors of real signals are typically close to each other according to a distance metric; sometimes, however, two successively encoded vectors may be relatively far apart. Our likelihood codebook re-ordering vector quantization algorithm exploits the structure within a signal through the non-uniformity of the reproduction-vector transition probabilities in a data sequence. Input vectors with a higher probability of transition from the prior reproduction vector are assigned indices of smaller value: the code vectors most likely to follow a given vector are assigned indices closer to 0, while the less likely are assigned indices of higher value. This re-ordering gives the reproduction dictionary a structure suitable for entropy coding such as Huffman and arithmetic coding. Since such transitions are common in real signals, it is expected that our proposed algorithm, when combined with entropy coding algorithms such as binary arithmetic and Huffman coding, will result in lower bit rates for the same distortion than a standard vector quantization algorithm. The re-ordering vector quantization approach on quantized indices can be useful in speech, image, and audio transmission.
By applying our re-ordering approach to these data types, we expect to achieve lower coding rates for a given distortion or perceptual quality. This reduced coding rate makes our proposed algorithm useful for the transmission and storage of large image and speech streams over their respective communication channels. The use of truncation in the likelihood codebook re-ordering scheme results in much lower compression rates without significantly distorting the perceptual quality of the signals. Text and other multimedia signals may also benefit from this additional layer of likelihood re-ordering compression.
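A compact sketch of the likelihood re-ordering idea (illustrative codebook size and source model, not the thesis's): each transmitted index is remapped to its likelihood rank given the previous index, skewing the symbol distribution toward 0 and lowering the entropy seen by a downstream Huffman or arithmetic coder. A decoder holding the same transition counts can invert the mapping.

```python
import numpy as np

def reorder(indices, counts):
    """Map each index to its likelihood rank given the previous index."""
    order = np.argsort(-counts, axis=1)     # per row: most -> least likely
    rank = np.argsort(order, axis=1)        # rank[p, c] = position of c
    out = [indices[0]]                      # first index sent as-is
    for prev, cur in zip(indices, indices[1:]):
        out.append(rank[prev, cur])
    return np.array(out)

def entropy(symbols):
    _, freq = np.unique(symbols, return_counts=True)
    p = freq / freq.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
K = 16                                      # codebook size (assumption)
# Index stream that tends to stay put or move to a neighbour, mimicking
# the inter-vector correlation of real signals.
idx = [0]
for _ in range(20_000):
    idx.append(int(idx[-1] + rng.choice([-1, 0, 0, 1])) % K)
idx = np.array(idx)

counts = np.ones((K, K))                    # transition counts, +1 smoothing
np.add.at(counts, (idx[:-1], idx[1:]), 1)

print(f"plain indices:      {entropy(idx):.2f} bits/symbol")
print(f"re-ordered indices: {entropy(reorder(idx, counts)):.2f} bits/symbol")
```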
16

Source and Channel Coding for Audiovisual Communication Systems

Kim, Moo Young January 2004 (has links)
Topics in source and channel coding for audiovisual communication systems are studied. The goal of source coding is to represent a source with the lowest possible rate for a particular distortion, or with the lowest possible distortion at a given rate. Channel coding adds redundancy to the quantized source information to enable recovery from channel errors. This thesis consists of four topics. Firstly, based on high-rate theory, we propose Karhunen–Loève transform (KLT)-based classified vector quantization (VQ) to efficiently exploit the advantages of optimal VQ over scalar quantization (SQ). Compared with code-excited linear predictive (CELP) speech coding, KLT-based classified VQ provides not only higher SNR and perceptual quality, but also lower computational complexity; further improvement is obtained by companding. Secondly, we compare various transmitter-based packet-loss recovery techniques from a rate-distortion viewpoint for real-time audiovisual communication systems over the Internet. We conclude that, in most circumstances, multiple description coding (MDC) is the best packet-loss recovery technique; if the channel conditions are known, channel-optimized MDC yields better performance. Thirdly, compared with resolution-constrained quantization (RCQ), entropy-constrained quantization (ECQ) produces fewer distortion outliers but is more sensitive to channel errors. We apply a generalized γ-th power distortion measure to design a new RCQ algorithm that has fewer distortion outliers and is more robust against source mismatch than conventional RCQ methods. Finally, designing quantizers that effectively remove irrelevancy as well as redundancy is considered. Taking into account the just noticeable difference (JND) of human perception, we design a new RCQ method with improved performance in terms of mean distortion and distortion outliers. Based on high-rate theory, the optimal centroid density and its corresponding mean distortion are also accurately predicted. The latter two quantization methods can be combined with practical source coding systems such as KLT-based classified VQ and with joint source-channel coding paradigms such as MDC.
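As background for the RCQ discussion, a small sketch of resolution-constrained quantizer design via the classic Lloyd iteration (the standard method, not the thesis's γ-th-power variant), reporting the fraction of distortion outliers; the codebook size and initialization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=100_000)                 # training data (assumed Gaussian)

# Lloyd iteration for a resolution-constrained quantizer with 8 levels:
# alternate nearest-neighbour partitioning and centroid updates.
levels = np.linspace(-2.0, 2.0, 8)           # initial codebook (assumption)
for _ in range(50):
    edges = (levels[:-1] + levels[1:]) / 2   # nearest-neighbour partition
    cells = np.digitize(x, edges)
    levels = np.array([x[cells == k].mean() for k in range(levels.size)])

cells = np.digitize(x, (levels[:-1] + levels[1:]) / 2)
d = (x - levels[cells]) ** 2
print(f"MSE {d.mean():.4f};  outliers >10x mean: {(d > 10 * d.mean()).mean():.2%}")
```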
17

Task-oriented source coding: Secure transmission, detection

Villard, Joffrey 01 December 2011 (has links)
This thesis investigates some task-oriented source coding problems. In this framework, an information source is compressed/quantized in view of the task for which it is to be used. While the methods implemented in traditional communication systems aim at enabling good reconstruction of the source at the receiver, here the eventual use of the data is considered all along the process, from observation to transmission. In particular, we derive fundamental results on source coding for secure transmission (following Shannon's approach) and on high-rate quantization for detection (following Bennett's approach). In both cases, the characteristics of the environment can be judiciously exploited to enhance the overall performance of the system. Applications of these results arise in many practical contexts (e.g. production control, field surveillance, environmental monitoring, multimedia broadcasting, etc.).
18

Secure Text Communication for the Tiger XS

Hertz, David January 2006 (has links)
The option of communicating via SMS messages can be considered available in all GSM networks; it therefore constitutes an almost universally available method for mobile communication.

The Tiger XS, a device for secure communication manufactured by Sectra, is equipped with an encrypted text message transmission system. As the text message service of this device is becoming increasingly popular, and as options to connect the Tiger XS to computers or to a keyboard are being researched, the text message service is in need of an upgrade.

This thesis proposes amendments to the existing protocol structure. It thoroughly examines a number of options for source coding of small text messages and makes recommendations for the implementation of such features. It also suggests security enhancements and introduces a novel form of steganography.
19

Quantization for Low Delay and Packet Loss

Subasingha, Subasingha Shaminda 22 April 2010 (has links)
Quantization of multimodal vector data in Realtime Interactive Communication Networks (RICNs) associated with application areas such as speech, video, audio, and haptic signals introduces a set of unique challenges. In particular, achieving the necessary distortion performance with minimum rate while maintaining low end-to-end delay and handling packet losses is of paramount importance. This dissertation presents vector quantization schemes which aim to satisfy these important requirements based on two source coding paradigms: (1) predictive coding and (2) distributed source coding (DSC). Gaussian Mixture Models (GMMs) can be used to model any probability density function (pdf) with an arbitrarily small error given a sufficient number of mixture components; hence, they can effectively model the underlying pdfs of a variety of data in RICN applications. In this dissertation, we first present GMM Kalman predictive coding, which combines transform-domain predictive GMM quantization techniques with Kalman filtering principles. In particular, we show how suitable modeling of the quantization noise leads to a signal-adaptive GMM Kalman predictive coder with improved coding performance. Moreover, we demonstrate how running a GMM Kalman predictive coder to convergence can be used to design a stationary GMM Kalman predictive coding system which provides improved coding of GMM vector data with only a modest increase in run-time complexity over the baseline. Next, we address the issue of packet loss in the networks using GMM Kalman predictive coding principles. In particular, we show how an initial GMM Kalman predictive coder can be utilized to obtain a robust GMM predictive coder specifically designed to operate under packet loss. We demonstrate how one can define sets of encoding and decoding modes, and design special Kalman encoding and decoding gains for each mode. In this framework, GMM predictive coder design amounts to determining the special Kalman gains that minimize the expected mean squared error at the decoder under packet-loss conditions. Finally, we present analytical techniques for modeling, analyzing, and designing Wyner-Ziv (WZ) quantizers for distributed source coding of jointly Gaussian vector data with imperfect side information. In most DSC implementations, the side information is not explicitly available at the decoder, so almost all practical implementations obtain it from previously decoded frames. Due to model imperfections, packet losses, previous decoding errors, and quantization noise, the available side information is usually noisy, yet the design of Wyner-Ziv quantizers for imperfect side information has not been widely addressed in the DSC literature. The analytical techniques presented in this dissertation explicitly assume the existence of imperfect side information at the decoder. Furthermore, we demonstrate how the design problem for vector data can be decomposed into independent scalar design subproblems, and we present analytical techniques to compute the optimum step size and bit allocation for each scalar quantizer such that the decoder's expected vector mean squared error (MSE) is minimized. Simulation results verify that the MSE predicted by the presented analytical techniques closely follows the simulations.
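A stripped-down sketch of the predictive-coding paradigm on a scalar Gauss-Markov source, a DPCM-style stand-in that omits the dissertation's GMM modeling, Kalman gains, and mode machinery; all parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
a, n, step = 0.95, 20_000, 0.3               # AR coefficient, length, step (assumptions)
x = np.zeros(n)                              # Gauss-Markov (AR(1)) source
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(1 - a * a))

def entropy(sym):
    _, c = np.unique(sym, return_counts=True)
    p = c / c.sum()
    return float(-(p * np.log2(p)).sum())

# Predictive coding: quantize the prediction error; encoder and decoder
# track the same reconstruction, so quantization errors do not accumulate.
xhat = 0.0
idx = np.empty(n, dtype=int)
for t in range(n):
    pred = a * xhat                                # one-step linear prediction
    idx[t] = int(np.round((x[t] - pred) / step))   # transmitted index
    xhat = pred + step * idx[t]                    # shared reconstruction

# Same step size (hence comparable distortion), far fewer bits needed.
print(f"direct quantization: {entropy(np.round(x / step)):.2f} bits/sample")
print(f"predictive coding:   {entropy(idx):.2f} bits/sample")
```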
20

Slepian-Wolf coded nested quantization (SWC-NQ) for Wyner-Ziv coding: high-rate performance analysis, code design, and application to cooperative networks

Liu, Zhixin 15 May 2009 (has links)
No description available.
