About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

On Optimum Conventional Quantization for Source Coding with Side Information at the Decoder

Zheng, Lin January 2007 (has links)
In many scenarios, side information naturally exists in point-to-point communications. Although side information can be present at the encoder and/or the decoder, yielding several cases, the most important case, and the one that warrants particular attention, is source coding with side information at the decoder (Wyner-Ziv coding), which requires different design strategies than the conventional source coding problem. Because of the difficulty of jointly designing the auxiliary random variable and the reconstruction function, a common approach to this lossy source coding problem is to apply conventional vector quantization followed by Slepian-Wolf coding. In this thesis, we investigate the best rate-distortion performance achievable asymptotically by practical Wyner-Ziv coding schemes of this kind, from an information-theoretic viewpoint and a numerical-computation viewpoint respectively.

From the information-theoretic viewpoint, we establish the corresponding rate-distortion function $\hat{R}_{WZ}(D)$ for any memoryless pair $(X,Y)$ and any distortion measure. Given an arbitrary single-letter distortion measure $d$, it is shown that the best rate achievable asymptotically, under the constraint that $X$ is recovered with distortion no greater than $D \geq 0$, is $\hat{R}_{WZ}(D) = \min_{\hat{X}} [I(X; \hat{X}) - I(Y; \hat{X})]$, where the minimum is taken over all auxiliary random variables $\hat{X}$ such that $Ed(X, \hat{X}) \leq D$ and $\hat{X}\to X \to Y$ is a Markov chain.

Further, we are interested in designing practical Wyner-Ziv coding. With the characterization of $\hat{R}_{WZ}(D)$, this reduces to investigating $\hat{X}$. From the viewpoint of numerical computation, an extended Blahut-Arimoto algorithm is proposed to study the rate-distortion performance, as well as to determine the random variable $\hat{X}$ that achieves $\hat{R}_{WZ}(D)$, which provides guidelines for designing practical Wyner-Ziv coding.

In most cases, the random variable $\hat{X}$ that achieves $\hat{R}_{WZ}(D)$ differs from the random variable $\hat{X}'$ that achieves the classical rate-distortion function $R(D)$ without side information at the decoder. Interestingly, the extended Blahut-Arimoto algorithm allows us to observe an interesting phenomenon: there are indeed cases where $\hat{X} = \hat{X}'$. To gain deeper insight into the quantizer design problem for practical Wyner-Ziv coding versus classical rate-distortion coding, we give a mathematical proof of the conditions under which the two random quantizers are equivalent or distinct. We completely settle this problem for the case where ${\cal X}$, ${\cal Y}$, and $\hat{\cal X}$ are all binary with the Hamming distortion measure. We also determine sufficient conditions (equivalent conditions) for the non-binary-alphabet case with the Hamming distortion measure and for the Gaussian source with the mean-squared error distortion measure, respectively.
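The extended Blahut-Arimoto algorithm described above lends itself to a compact numerical sketch. The following is an illustration only, not the thesis's implementation: it assumes the Lagrangian formulation, sweeping a slope parameter s to trace out $\hat{R}_{WZ}(D)$, and alternates between the induced decoder-side marginal $q(\hat{x}|y)$ and the test channel $p(\hat{x}|x)$; all function and variable names are illustrative.

```python
import numpy as np

def extended_blahut_arimoto(p_xy, dist, s, n_iter=1000, tol=1e-10):
    """Trace one point of R_hat_WZ(D) by minimizing
    I(X; Xhat | Y) + s * E[d(X, Xhat)] over test channels p(xhat|x),
    alternating with the induced decoder-side marginal q(xhat|y).
    p_xy: joint pmf of (X, Y); dist: d(x, xhat); s: Lagrangian slope."""
    nx, nxh = p_xy.shape[0], dist.shape[1]
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / p_x[:, None]                  # p(y|x)
    p_x_given_y = (p_xy / p_xy.sum(axis=0)).T          # p(x|y)

    rng = np.random.default_rng(0)
    pc = rng.random((nx, nxh))                         # test channel p(xhat|x)
    pc /= pc.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        q = p_x_given_y @ pc                           # q(xhat|y)
        # optimal channel is proportional to
        # exp( sum_y p(y|x) log q(xhat|y) - s * d(x, xhat) )
        log_pc = p_y_given_x @ np.log(q + 1e-300) - s * dist
        new = np.exp(log_pc - log_pc.max(axis=1, keepdims=True))
        new /= new.sum(axis=1, keepdims=True)
        if np.abs(new - pc).max() < tol:
            pc = new
            break
        pc = new

    q = p_x_given_y @ pc
    # rate I(X;Xhat|Y) in bits, using p(xhat|x,y) = p(xhat|x) (Markov chain)
    joint = p_xy[:, :, None] * pc[:, None, :]
    mask = joint > 0
    ratio = pc[:, None, :] / q[None, :, :]
    rate = (joint[mask] * np.log2(ratio[mask])).sum()
    distortion = float((p_x[:, None] * pc * dist).sum())
    return rate, distortion, pc

# Example: doubly symmetric binary source (crossover 0.25), Hamming distortion
p_xy = np.array([[0.375, 0.125], [0.125, 0.375]])
print(extended_blahut_arimoto(p_xy, 1.0 - np.eye(2), s=2.0)[:2])
```

Sweeping s over positive values traces the full rate-distortion trade-off; the converged channel pc is the numerically determined $\hat{X}$ discussed in the abstract.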
2

Wyner-Ziv coding based on TCQ and LDPC codes and extensions to multiterminal source coding

Yang, Yang 01 November 2005 (has links)
Driven by a host of emerging applications (e.g., sensor networks and wireless video), distributed source coding (i.e., Slepian-Wolf coding, Wyner-Ziv coding, and various other forms of multiterminal source coding) has recently become a very active research area. In this thesis, we first design a practical coding scheme for the quadratic Gaussian Wyner-Ziv problem, because in this special case no rate loss is suffered due to the unavailability of the side information at the encoder. In order to approach the Wyner-Ziv distortion limit $D_{WZ}(R)$, the trellis coded quantization (TCQ) technique is employed to quantize the source $X$, and an irregular LDPC code is used to implement Slepian-Wolf coding of the quantized source input $Q(X)$ given the side information $Y$ at the decoder. An optimal non-linear estimator is devised at the joint decoder to compute the conditional mean of the source $X$ given the dequantized version of $Q(X)$ and the side information $Y$. Assuming ideal Slepian-Wolf coding, our scheme performs only 0.2 dB away from the Wyner-Ziv limit $D_{WZ}(R)$ at high rate, which mirrors the performance of entropy-coded TCQ in classic source coding. Practical designs perform 0.83 dB away from $D_{WZ}(R)$ at medium rates. With 2-D trellis-coded vector quantization, the performance gap to $D_{WZ}(R)$ is only 0.66 dB at 1.0 b/s and 0.47 dB at 3.3 b/s. We then extend the proposed Wyner-Ziv coding scheme to the quadratic Gaussian multiterminal source coding problem with two encoders. Both direct and indirect settings of multiterminal source coding are considered. An asymmetric code design containing one classical source coding component and one Wyner-Ziv coding component is first introduced and shown to be able to approach the corner points of the theoretically achievable limits in both settings. To approach any point on the theoretically achievable limits, a second approach based on source splitting is then described. One classical source coding component, two Wyner-Ziv coding components, and a linear estimator are employed in this design. Proofs are provided to show the achievability of any point on the theoretical limits in both settings, assuming that both the source coding and the Wyner-Ziv coding components are optimal. The performance of practical schemes is only 0.15 b/s away from the theoretical limits for the asymmetric approach, and up to 0.30 b/s away from the limits for the source-splitting approach.
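For the quadratic Gaussian setup described above, the conditional-mean estimator at the joint decoder has a closed form once the quantizer cell is known. The sketch below is a simplification under stated assumptions: a plain scalar quantizer cell stands in for the TCQ codeword, X ~ N(0, var_x), and Y = X + Z with independent Gaussian noise Z ~ N(0, var_z); the function name and parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

def joint_decoder_estimate(cell_lo, cell_hi, y, var_x, var_z):
    """Minimum-MSE reconstruction at the Wyner-Ziv decoder:
    E[X | Q(X) indicates the cell [cell_lo, cell_hi], Y = y].
    Given Y = y, X is Gaussian; conditioning on the quantizer cell then
    truncates that Gaussian, whose mean has a standard closed form."""
    mu = y * var_x / (var_x + var_z)           # posterior mean of X given y
    sd = np.sqrt(var_x * var_z / (var_x + var_z))
    a, b = (cell_lo - mu) / sd, (cell_hi - mu) / sd
    z = norm.cdf(b) - norm.cdf(a)              # posterior mass of the cell
    if z < 1e-15:                              # numerical fallback: midpoint
        return 0.5 * (cell_lo + cell_hi)
    # mean of the posterior truncated to the quantizer cell
    return mu + sd * (norm.pdf(a) - norm.pdf(b)) / z
```

Averaging the side information and the dequantized value in this way, rather than using the cell centroid alone, is what makes the estimator non-linear and is where the decoder's side information pays off.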
3

Layered Wyner-Ziv video coding for noisy channels

Xu, Qian 01 November 2005 (has links)
The growing popularity of video sensor networks and video cellular phones has generated the need for low-complexity and power-efficient multimedia systems that can handle multiple video input and output streams. While standard video coding techniques fail to satisfy these requirements, distributed source coding is a promising technique for "uplink" applications. Wyner-Ziv coding refers to lossy source coding with side information at the decoder. Based on recent theoretical results on successive Wyner-Ziv coding, we propose in this thesis a practical layered Wyner-Ziv video codec using the DCT, a nested scalar quantizer (NSQ), and irregular-LDPC-code-based Slepian-Wolf coding (or lossless source coding with side information) for the noiseless channel. The DCT is applied as an approximation to the conditional KLT, which makes the components of the transformed block conditionally independent given the side information. NSQ is a binning scheme that facilitates layered bit-plane coding of the bin indices while reducing the bit rate. LDPC-code-based Slepian-Wolf coding exploits the correlation between the quantized version of the source and the side information to achieve further compression. In contrast to previous work, an attractive feature of our proposed system is that video encoding is done only once, but decoding is allowed at many lower bit rates without quality loss. For Wyner-Ziv coding over discrete noisy channels, we present a Wyner-Ziv video codec using IRA codes for Slepian-Wolf coding, based on the idea of two equivalent channels. For video streaming applications where the channel is packet based, we apply an unequal error protection scheme to the embedded Wyner-Ziv coded video stream to find the optimal source-channel coding trade-off for a target transmission rate over a packet erasure channel.
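The binning performed by the nested scalar quantizer can be shown in a few lines. This is a toy sketch of the NSQ idea only, assuming a uniform fine quantizer with step delta and coset modulus m, and omitting the bit-plane layering and the LDPC-based Slepian-Wolf stage; decoding succeeds when the side information lands close enough to the source.

```python
import numpy as np

def nsq_encode(x, delta, m):
    """Nested scalar quantizer: keep only the coset (bin) index of the
    fine quantization index, reducing the rate to log2(m) bits."""
    q = int(np.round(x / delta))     # fine uniform quantization index
    return q % m                     # transmitted bin index

def nsq_decode(bin_idx, y, delta, m):
    """Decoder: among reconstruction points whose fine index falls in
    the received coset, pick the one closest to the side information y."""
    base = int(np.round(y / delta))
    k0 = base - ((base - bin_idx) % m)      # coset member at or below base
    candidates = [k0 - m, k0, k0 + m]
    best = min(candidates, key=lambda k: abs(k * delta - y))
    return best * delta

# e.g. nsq_encode(1.3, 0.5, 4) -> 3; nsq_decode(3, 1.1, 0.5, 4) -> 1.5
```

The coset structure is what allows one encoding to serve many decoders: as the abstract notes, the bin indices can then be bit-plane coded so that decoding at lower rates needs only a prefix of the stream.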
4

Distributed source coding: new tools and application to video compression

Kubasov, Denis; Guillemot, Christine January 2008 (has links) (PDF)
Doctoral thesis: Computer Science: Rennes 1: 2008. / Title taken from the title page of the electronic document. Bibliography pp. 209-218.
5

Lempel-Ziv Factorization Using Less Time and Space

Chen, Gang 08 1900 (has links)
For 30 years the Lempel-Ziv factorization LZ_x of a string x = x[1..n] has been a fundamental data structure of string processing, especially valuable for string compression and for computing all the repetitions (runs) in x. With the advent of the Internet, a huge need for Lempel-Ziv factorization was created, and it has since become the basis of efficient data transmission formats on the Internet.

Traditionally the standard method for computing LZ_x was based on O(n)-time processing of the suffix tree ST_x of x. Ukkonen's algorithm constructs the suffix tree online and so permits LZ_x to be built from subtrees of ST_x; this gives it an advantage, at least in terms of space, over the fast and compact version of McCreight's suffix tree construction algorithm (STCA) [37] due to Kurtz [24]. In 2000 Abouelhoda, Kurtz & Ohlebusch proposed an O(n)-time Lempel-Ziv factorization algorithm based on an "enhanced" suffix array, that is, a suffix array SA_x together with other supporting data structures.

In this thesis we first examine some previous algorithms for computing the Lempel-Ziv factorization. We then analyze the rationale behind their development and introduce a collection of new algorithms for computing the LZ factorization. By theoretical proof and by experimental comparison of running time and storage usage, we show that our new algorithms appear, in their theoretical behavior, in practice, or both, to be superior to those previously proposed. In the last chapter our conclusions are given, and some open problems are pointed out for future research. / Thesis / Master of Science (MSc)
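As a point of reference for the factorization the thesis studies, here is a quadratic-time sketch that computes LZ_x directly from its definition; the thesis's algorithms produce the same output in O(n) time via suffix trees or suffix arrays. The tuple format chosen for the factors is illustrative.

```python
def lz_factorize(x):
    """Lempel-Ziv factorization of x: each factor is either a fresh
    single character or the longest substring starting at the current
    position that also occurs starting at some earlier position
    (self-overlapping matches allowed). Quadratic-time reference version."""
    n, i, factors = len(x), 0, []
    while i < n:
        best_len, best_src = 0, -1
        for j in range(i):                        # candidate earlier start
            l = 0
            while i + l < n and x[j + l] == x[i + l]:
                l += 1
            if l > best_len:
                best_len, best_src = l, j
        if best_len == 0:
            factors.append((x[i], None))          # literal factor
            i += 1
        else:
            factors.append((best_src, best_len))  # (source position, length)
            i += best_len
    return factors

# e.g. lz_factorize("abaababa") yields factors a | b | a | aba | ba
```

Every position of x is covered exactly once by a factor, which is why the factorization exposes all runs in x and underpins LZ77-style compression.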
6

Graph-Based Solution for Two Scalar Quantization Problems in Network Systems

Zheng, Qixue January 2018 (has links)
This thesis addresses optimal scalar quantizer design for two problems: the two-stage Wyner-Ziv coding problem and the multiple description coding problem for finite-alphabet sources. The optimization problems are formulated as the minimization of a weighted sum of distortions and rates. The proposed solutions are globally optimal when the cells in each partition are contiguous. The solution algorithms are both based on solving the single-source or the all-pairs minimum-weight path (MWP) problem in certain weighted directed acyclic graphs (WDAGs). When the conventional dynamic programming technique is used to solve the underlying MWP problems, the time complexity achieved is $O(N^3)$ for both problems, where $N$ is the size of the source alphabet.

We first present the optimal design of a two-stage Wyner-Ziv scalar quantizer with forwardly or reversely degraded side information (SI) for finite-alphabet sources and SI. We assume that binning is performed optimally and address the design of the quantizer partitions. A solution based on dynamic programming is proposed with $O(N^3)$ time complexity. Further, a so-called partial Monge property is introduced and a faster solution algorithm exploiting this property is proposed. Experimental results assess the practical performance of the proposed scheme.

We then present the optimal design of an improved modified multiple-description scalar quantizer (MMDSQ). The improvement is achieved by optimizing all the scalar quantizers under the assumption that all the central and side quantizers have contiguous codecells. The optimization is based on solving the single-source MWP problem in a coupled quantizer graph and the all-pairs MWP problem in a WDAG. Another variant with the same optimization but enhanced with a better decoding process is also presented to decrease the gap to the theoretical bounds. Experiments show that both designs for the second problem perform close to, or even better than, schemes from the literature. / Thesis / Master of Applied Science (MASc)
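The reduction from contiguous-cell quantizer design to a minimum-weight path problem can be sketched for the simplest single-quantizer case. This is an illustrative reconstruction under stated assumptions, not the thesis's algorithm: it models the rate as entropy coding of the cell index and uses squared error, and it omits the Wyner-Ziv binning, the two-stage structure, and the partial-Monge speedup; names are illustrative.

```python
import numpy as np

def optimal_contiguous_quantizer(p, values, lam):
    """Optimal scalar quantizer with contiguous cells as a shortest path
    in a WDAG: node i is the boundary before symbol i, and edge (i, j)
    represents the cell {i, ..., j-1} with weight
    cell distortion + lam * cell rate. Naive DP: O(N^2) edges with an
    O(N) weight evaluation each, i.e. O(N^3) overall."""
    n = len(p)

    def cell_weight(i, j):
        pc = p[i:j].sum()
        if pc == 0:
            return 0.0
        centroid = (p[i:j] * values[i:j]).sum() / pc    # MSE-optimal point
        distortion = (p[i:j] * (values[i:j] - centroid) ** 2).sum()
        rate = -pc * np.log2(pc)                        # entropy-coded index
        return distortion + lam * rate

    dp = [float("inf")] * (n + 1)
    back = [-1] * (n + 1)
    dp[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):
            w = dp[i] + cell_weight(i, j)
            if w < dp[j]:
                dp[j], back[j] = w, i
    bounds, j = [], n                 # recover the optimal cell boundaries
    while j > 0:
        bounds.append((back[j], j))
        j = back[j]
    return dp[n], bounds[::-1]
```

A minimum-weight path from node 0 to node N spells out an optimal partition into contiguous cells; the thesis solves richer two-stage and multiple-description variants of this same shortest-path structure, including the coupled quantizer graph.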
7

SOME MEASURED PERFORMANCE BOUNDS AND IMPLEMENTATION CONSIDERATIONS FOR THE LEMPEL-ZIV-WELCH DATA COMPACTION ALGORITHM

Jacobsen, H. D. 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / The Lempel-Ziv-Welch (LZW) algorithm is a popular data compaction technique that has been adopted by CCITT in its V.42bis recommendation and is often implemented in association with the V.32 standard for 9600 bps modems. It has also been implemented as Microcom Networking Protocol (MNP) Level 7, where it goes by the name of Enhanced Data Compression. LZW compacts data by encoding frequently occurring input strings with a single output symbol. The algorithm automatically generates a string dictionary for each symbol at each end of the transmission path. The amount of compaction that can be derived with the LZW algorithm varies with the type of data being transmitted and the efficiency with which table entries can be indexed. Table indexing is usually implemented by use of a hashing table. Although some manufacturers advertise a 4-to-1 gain in throughput, this seems to be an extreme case. This paper documents an implementation of the exact LZW algorithm. The results presented in this paper are significantly lower, typically on the order of 1-to-2 for ASCII text, with substantially less compaction for pre-compacted files or files containing random bit patterns. The efficiency of the LZW algorithm on ASCII text is shown to be a function of dictionary size and block size. Although fewer transmitted symbols are required for larger dictionary tables, the additional bits required for the symbol index are marginally greater than the efficiency that is gained. The net effect is that dictionary sizes beyond 2K are increasingly less efficient for input data block sizes of 10K or more. The author concludes that the algorithm could be implemented as a direct table look-up rather than through a hashing algorithm. This would allow LZW to be implemented with very simple firmware and with a maximum of hardware efficiency.
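The string-dictionary mechanism the paper describes is easy to show in miniature. Below is a minimal encoder-side sketch, assuming a direct dictionary lookup in place of the paper's hashing table and fixed-width output codes rather than the variable-width codes of V.42bis; the function name and the cap on dictionary size are illustrative.

```python
def lzw_compact(data, max_dict_size=4096):
    """LZW compaction sketch: emit one output symbol for the longest
    input string already in the dictionary, then extend the dictionary
    with that string plus the next character."""
    dictionary = {bytes([i]): i for i in range(256)}   # seed with singletons
    next_code, w, out = 256, b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                          # keep extending the match
        else:
            out.append(dictionary[w])       # emit code for longest match
            if next_code < max_dict_size:
                dictionary[wc] = next_code  # grow the string dictionary
                next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

# e.g. lzw_compact(b"TOBEORNOTTOBEORTOBEORNOT") emits 16 codes for 24 bytes
```

The decoder rebuilds the identical dictionary from the code stream alone, which is why no dictionary ever needs to be transmitted; the paper's compaction figures then come down to how index width trades off against dictionary size.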
8

Layered Wyner-Ziv video coding: a new approach to video compression and delivery

Xu, Qian 15 May 2009 (has links)
Following recent theoretical works on successive Wyner-Ziv coding, we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered Wyner-Ziv coding for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic Wyner-Ziv coding when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that Wyner-Ziv coding gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding is more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks. For scalable video transmission over the Internet and 3G wireless networks, we propose a system for receiver-driven layered multicast based on layered Wyner-Ziv video coding and digital fountain coding. Digital fountain codes are near-capacity erasure codes that are ideally suited for multicast applications because of their rateless property. By combining an error-resilient Wyner-Ziv video coder and rateless fountain codes, our system allows reliable multicast of high-quality video to an arbitrary number of heterogeneous receivers without the requirement of feedback channels. Extending this work on separate source-channel coding, we consider distributed joint source-channel coding by using a single channel code for both video compression (via Slepian-Wolf coding) and packet loss protection. We choose Raptor codes, the best approximation to a digital fountain, and address in detail both encoder and decoder designs. Simulation results show that, compared to one separate design using Slepian-Wolf compression plus erasure protection and another based on FGS coding plus erasure protection, the proposed joint design provides better video quality at the same number of transmitted packets.
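The rateless property of digital fountain codes can be illustrated with a toy LT-style encoder and peeling decoder. This is a didactic sketch only: it uses a simplified degree distribution instead of the robust soliton distribution and stands in for the Raptor codes actually used in the thesis; all names are illustrative.

```python
import random

def lt_encode_symbol(blocks, rng):
    """One rateless output symbol: the XOR of a random subset of source
    blocks. A real LT/Raptor code draws the degree from the robust
    soliton distribution; a uniform toy choice is used here."""
    degree = rng.choice([1, 2, 3, 4])
    idx = set(rng.sample(range(len(blocks)), degree))
    val = 0
    for i in idx:
        val ^= blocks[i]
    return idx, val

def lt_peel_decode(symbols, k):
    """Peeling decoder: recover a block from any degree-1 symbol, XOR it
    out of the remaining symbols, and repeat until no progress is made."""
    known = [None] * k
    pending = [(set(idx), val) for idx, val in symbols]
    progress = True
    while progress:
        progress = False
        nxt = []
        for idx, val in pending:
            for i in [i for i in idx if known[i] is not None]:
                idx.discard(i)          # subtract already-recovered blocks
                val ^= known[i]
            if len(idx) == 1:
                i = idx.pop()
                if known[i] is None:
                    known[i] = val      # a degree-1 symbol reveals a block
                    progress = True
            elif idx:
                nxt.append((idx, val))
        pending = nxt
    return known

# A receiver collects somewhat more symbols than blocks, from any senders,
# in any order; if decoding stalls it simply waits for more symbols.
rng = random.Random(1)
blocks = [rng.randrange(256) for _ in range(20)]
syms = [lt_encode_symbol(blocks, rng) for _ in range(40)]
print(lt_peel_decode(syms, 20))   # recovered blocks; None where stalled
```

This feedback-free "collect until you have enough" behavior is exactly what makes fountain codes attractive for the receiver-driven multicast setting described in the abstract.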
9

Optimizing Lempel-Ziv Factorization for the GPU Architecture

Ching, Bryan 01 June 2014 (has links) (PDF)
Lossless data compression is used to reduce storage requirements, allowing for the relief of I/O channels and better utilization of bandwidth. The Lempel-Ziv lossless compression algorithms form the basis for many of the most commonly used compression schemes. General purpose computing on graphics processing units (GPGPU) allows us to take advantage of the massively parallel nature of GPUs for computations other than their original purpose of rendering graphics. Our work targets the use of GPUs for general lossless data compression. Specifically, we developed and ported an algorithm that constructs the Lempel-Ziv factorization directly on the GPU. Our implementation bypasses the sequential nature of LZ factorization and attempts to compute the factorization in parallel. By breaking the LZ factorization down into what we call the PLZ, we are able to outperform the fastest serial CPU implementations by up to 24x and to perform comparably to a parallel multicore CPU implementation. To achieve these speeds, our implementation outputs LZ factorizations that are on average only 0.01 percent larger than the optimal solution that could be computed sequentially. We also reevaluate the fastest GPU suffix array construction algorithm, which is needed to compute the LZ factorization, and find speedups of up to 5x over the fastest CPU implementations.
