1

Approximate signal reconstruction from partial information

Moose, Phillip J., January 1994 (has links)
Thesis (M.S.)--Virginia Polytechnic Institute and State University, 1994. / Vita. Abstract. Includes bibliographical references (leaves 105-107). Also available via the Internet.
2

Distributed compression and squashed entanglement

Savov, Ivan. January 1900 (has links)
Thesis (M.Sc.). / Written for the Dept. of Physics. Title from title page of PDF (viewed 2008/05/29). Includes bibliographical references.
3

Distributed joint power and rate adaption in ad hoc networks

Awuor, Frederick Mzee. January 2011 (has links)
M. Tech. Electrical Engineering. / This study proposes a distributed joint power and rate adaptation algorithm (JRPA) for ad hoc networks based on minimising coupled interference. In the proposed method, the influence of coupled interference is controlled by dynamically adjusting the users' transmit power choices, so that users are aware of the current link status while determining their data rates. Because cooperation is built into the scheme, each user maximises the utility of the other users as it maximises its own, improving collective network performance. Solving the resulting network utility maximisation problem is equivalent to a supermodular game in which users cooperate to maximise both local and global utility; supermodular game theory is therefore used to analyse the optimality and convergence of the proposed solution.
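A toy sketch of the kind of distributed best-response power/rate update such a game-theoretic formulation suggests; the utility form, gain matrix, pricing term, and numerical values below are assumptions for illustration only, not the thesis's actual JRPA algorithm:

```python
import numpy as np

# Illustrative setup (assumed): random link gains, noise level, and an
# interference "price" that couples the users' utilities.
rng = np.random.default_rng(0)
n_users = 4
G = rng.uniform(0.05, 1.0, size=(n_users, n_users))  # G[i, j]: gain from tx j to rx i
np.fill_diagonal(G, 1.0)                              # direct links are strongest
noise, price, p_max = 0.1, 0.5, 2.0
p = np.full(n_users, 0.5)                             # initial transmit powers

def sinr(i, p):
    interference = G[i] @ p - G[i, i] * p[i]
    return G[i, i] * p[i] / (noise + interference)

# Best-response iteration: each user picks the power that maximises a
# priced utility log(1 + SINR_i) - price * p_i, then sets its rate from it.
for _ in range(50):
    for i in range(n_users):
        grid = np.linspace(1e-3, p_max, 200)          # crude 1-D search
        utilities = []
        for q in grid:
            trial = p.copy()
            trial[i] = q
            utilities.append(np.log1p(sinr(i, trial)) - price * q)
        p[i] = grid[int(np.argmax(utilities))]

rates = [np.log1p(sinr(i, p)) for i in range(n_users)]
print("powers:", np.round(p, 3))
print("rates :", np.round(rates, 3))
```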
4

On the Asymptotic Rate-Distortion Function of Multiterminal Source Coding Under Logarithmic Loss

Li, Yanning January 2021 (has links)
We consider the asymptotic minimum rate under a logarithmic loss distortion constraint; more specifically, we find an expression for the minimum rate as the given distortions approach 0. The problem under consideration is separate encoding and joint decoding of two correlated information sources, subject to a logarithmic loss distortion constraint. We introduce a test channel whose transition probability (conditional probability mass function) captures the encoding and decoding process. First, we find the expression for the special case of doubly symmetric binary sources with binary-output test channels. The result is then extended to the case of arbitrary test channels: as the given distortions approach 0, the asymptotic rate coincides with that of the special case. Finally, we consider the general case and show that the key findings for the special case continue to hold. / Thesis / Master of Applied Science (MASc)
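For reference, the logarithmic loss referred to above is conventionally defined on "soft" reconstructions, where the decoder outputs a probability distribution over the source alphabet; in standard (assumed) notation:

```latex
% Logarithmic loss: the reconstruction \hat{x} is a probability
% distribution on the source alphabet \mathcal{X}.
d(x, \hat{x}) = \log \frac{1}{\hat{x}(x)},
\qquad
d_n(x^n, \hat{x}^n) = \frac{1}{n} \sum_{i=1}^{n} \log \frac{1}{\hat{x}_i(x_i)} .
```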
5

Approximate signal reconstruction from partial information

Moose, Phillip J. 10 June 2009 (has links)
It is known that transform techniques are not an optimal way to code a signal in terms of theoretical rate-distortion bounds. A signal may be coded more efficiently if side information is transmitted along with it; this side information can then be used to reconstruct the image at a later time. In this thesis, the transform coding used is Multiple Bases Representation (MBR), a scheme known to perform better than transform coding with a single basis. The method of Projection Onto Convex Sets (POCS) is used to reconstruct an approximation to the MBR signal from the side information: any number of constraints may be used, as long as each forms a closed and convex set and the side information provides the a priori knowledge needed to implement the corresponding projection. Several closed and convex sets are examined, including the MBR, positivity, sign, zero-crossing, minimum-increase, and minimum-decrease constraints. Constraints that tend to limit energy are not as effective as constraints that introduce energy into the signal, especially when the observed image is used as the initialization vector; with a different initialization vector, the POCS reconstruction performs considerably better. Two initialization vectors are proposed: the observed signal plus white noise, and the observed signal plus a constant. POCS initialized with the observed signal plus a constant performs better than POCS initialized with the observed signal alone. One nonconvex constraint, the Laplacian histogram constraint, is also considered; it requires other convex constraints to help ensure convergence of the reconstruction algorithm, but produces good-quality images. / Master of Science
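A minimal sketch of the POCS iteration described above, restricted to two simply stated convex sets (positivity and a known-samples side-information set); the signals and the choice of sets are assumptions for illustration, and the MBR constraint itself is not modelled:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
true_signal = np.abs(np.cumsum(rng.normal(size=n)))   # assumed nonnegative test signal
observed = true_signal + rng.normal(scale=0.5, size=n)

# Side information (assumed): exact values are known on a subset of samples.
known_idx = np.arange(0, n, 8)
known_val = true_signal[known_idx]

def project_positive(x):
    """Projection onto the closed convex set {x : x >= 0}."""
    return np.maximum(x, 0.0)

def project_known_samples(x):
    """Projection onto {x : x[known_idx] = known_val}."""
    y = x.copy()
    y[known_idx] = known_val
    return y

# POCS: cyclic projections onto the sets, started from the observed signal;
# with a nonempty intersection the iterates converge to a point in it.
x = observed.copy()
for _ in range(100):
    x = project_known_samples(project_positive(x))

print("error before POCS:", np.linalg.norm(observed - true_signal))
print("error after  POCS:", np.linalg.norm(x - true_signal))
```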
6

Communication in decentralized control

Teneketzis, Demosthenis January 1980 (has links)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Includes bibliographical references. / by Demosthenis Teneketzis. / Ph.D.
7

Embedded system design and power-rate-distortion optimization for video encoding under energy constraints

Cheng, Wenye. January 2007 (has links)
Thesis (M.S.)--University of Missouri-Columbia, 2007. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on January 3, 2008). Includes bibliographical references.
8

Tree Encoding of Analog Data Sources

Bodie, John Bruce 04 1900 (has links)
Concepts of tree coding and of rate-distortion theory are applied to the problem of the transmission of analog signals over digital channels. Coding schemes are developed which yield improvements of up to six dB in signal-to-noise ratio over conventional techniques for the reproduction of speech waveforms. / Thesis / Master of Engineering (MEngr)
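A toy illustration of tree encoding of an analog source block, under assumed parameters: each branch of a binary code tree carries a +/- delta innovation, the reproduction is that innovation sequence passed through a first-order synthesis filter (so each branch letter depends on the whole path), and the best root-to-leaf path is found by exhaustive search rather than the sequential search a practical coder would use:

```python
import numpy as np
from itertools import product

# Block length, step size, filter coefficient and the AR(1) source are assumed.
rng = np.random.default_rng(2)
L, delta, a = 10, 0.6, 0.9

x = np.zeros(L)                          # correlated (AR(1)) source block
for t in range(1, L):
    x[t] = a * x[t - 1] + rng.normal(scale=0.4)

def synthesize(path):
    """Reproduction generated by driving a first-order filter with +/- delta."""
    recon, state = np.zeros(L), 0.0
    for t, bit in enumerate(path):
        state = a * state + (delta if bit else -delta)
        recon[t] = state
    return recon

# Exhaustive search over all 2^L paths for the minimum squared-error path.
best = min(product((0, 1), repeat=L),
           key=lambda p: np.sum((x - synthesize(p)) ** 2))

print("transmitted bits:", best)         # 1 bit per source sample
print("MSE:", round(float(np.mean((x - synthesize(best)) ** 2)), 4))
```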
9

Optimal source coding with signal transfer function constraints

Derpich, Milan January 2009 (has links)
Research Doctorate - Doctor of Philosophy (PhD) / This thesis presents results on optimal coding and decoding of discrete-time stochastic signals, in the sense of minimizing a distortion metric subject to a constraint on the bit-rate and on the signal transfer function from source to reconstruction. The first (preliminary) contribution of this thesis is the introduction of a new distortion metric that extends the mean squared error (MSE) criterion. We give this extension the name Weighted-Correlation MSE (WCMSE), and use it as the distortion metric throughout the thesis. The WCMSE is a weighted sum of two components of the MSE: the variance of the error component uncorrelated to the source, on the one hand, and the remainder of the MSE, on the other. The WCMSE can take account of signal transfer function constraints by assigning a larger weight to deviations from a target signal transfer function than to source-uncorrelated distortion. Within this framework, the second contribution is the solution of a family of feedback quantizer design problems for wide-sense stationary sources, using an additive noise model for quantization errors. These problems consist of finding the frequency responses of the filters deployed around a scalar quantizer that minimize the WCMSE for a fixed quantizer signal-to-(granular)-noise ratio (SNR). This general structure, which incorporates pre-, post-, and feedback filters, includes as special cases well-known source coding schemes such as pulse-code modulation (PCM), differential pulse-code modulation (DPCM), Sigma-Delta converters, and noise-shaping coders. The optimal frequency response of each of the filters in this architecture is found for each possible subset of the remaining filters being given and fixed. These results are then applied to oversampled feedback quantization. In particular, it is shown that, within the linear model used and for a fixed quantizer SNR, the MSE decays exponentially with the oversampling ratio, provided optimal filters are used at each oversampling ratio. If a subtractively dithered quantizer is utilized, then the noise model is exact, and the SNR constraint can be directly related to the bit-rate if entropy coding is used, regardless of the number of quantization levels. On the other hand, in the case of fixed-rate quantization, the SNR is related to the number of quantization levels, and hence to the bit-rate, when overload errors are negligible. It is shown that, for sources with unbounded support, the latter condition is violated for sufficiently large oversampling ratios. By deriving an upper bound on the contribution of overload errors to the total WCMSE, a lower bound on the decay rate of the WCMSE as a function of the oversampling ratio is found for fixed-rate quantization of sources with finite or infinite support. The third main contribution of the thesis is the introduction of the rate-distortion function (RDF) when the WCMSE is the distortion metric, denoted WCMSE-RDF, for which we provide a complete characterization for Gaussian sources. The WCMSE-RDF yields, as special cases, Shannon's RDF as well as the recently introduced RDF for source-uncorrelated distortions (RDF-SUD). For cases where only source-uncorrelated distortion is allowed, the RDF-SUD is extended to include the possibility of linear time-invariant feedback between the reconstructed signal and the coder input. It is also shown that feedback quantization schemes can achieve a bit-rate only 0.254 bits/sample above this RDF by using the same filters that minimize the reconstruction MSE for a quantizer-SNR constraint. The fourth main contribution of this thesis is a set of conditions under which knowledge of a realization of the RDF can be used directly to solve encoder-decoder design optimization problems. This result has direct implications for the design of subband coders with feedback, as well as for the design of encoder-decoder pairs for applications such as networked control. As the fifth main contribution, the RDF-SUD is utilized to show that, for Gaussian stationary sources with memory under the MSE distortion criterion, an upper bound on the information-theoretic causal RDF can be obtained by means of an iterative numerical procedure, at all rates. This bound is tighter than 0.5 bits/sample. Moreover, if there exists a realization of the causal RDF in which the reconstruction error is jointly stationary with the source, then the bound coincides with the causal RDF. The iterative procedure used to obtain this bound also yields a characterization of the filters in a scalar feedback quantizer whose operational rate exceeds the bound by less than 0.254 bits/sample. This constitutes an upper bound on the optimal performance theoretically attainable by any causal source coder for stationary Gaussian sources under the MSE distortion criterion.
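In symbols, the WCMSE decomposition described above can be written as follows; the weights a and b and the error notation are assumed here for illustration and are not taken verbatim from the thesis:

```latex
% e = \hat{x} - x is the reconstruction error; e_u is its component
% uncorrelated with the source x, with variance \sigma_u^2.
\mathrm{WCMSE}_{a,b}
  = a\,\bigl(\mathrm{MSE} - \sigma_u^2\bigr) + b\,\sigma_u^2 ,
\qquad
\mathrm{MSE} = \mathbb{E}\bigl[(\hat{x} - x)^2\bigr].
```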
10

Rate Distortion Theory for Causal Video Coding: Characterization, Computation Algorithm, Comparison, and Code Design

Zheng, Lin January 2012 (has links)
Due to the sheer volume of data involved, video coding is an important application of lossy source coding, and it has received wide industrial interest and support, as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based on a paradigm called predictive video coding, in which video source frames Xᵢ, i=1,2,...,N, are encoded in a frame-by-frame manner, and the encoder and decoder for each frame Xᵢ enlist help only from all previously encoded frames Sj, j=1,2,...,i-1. In this thesis, we look beyond all existing and proposed video coding standards and introduce a new coding paradigm called causal video coding, in which the encoder for each frame Xᵢ can use all previous original frames Xj, j=1,2,...,i-1, and all previously encoded frames Sj, while the corresponding decoder can use only the previously encoded frames. All studies, comparisons, and designs of causal video coding are carried out from an information-theoretic point of view. Let R*c(D₁,...,D_N) (respectively, R*p(D₁,...,D_N)) denote the minimum total rate required to achieve a given distortion level D₁,...,D_N > 0 in causal video coding (respectively, predictive video coding). A novel computation approach is proposed to analytically characterize, numerically compute, and compare the minimum total rate R*c(D₁,...,D_N) of causal video coding required to achieve a given distortion (quality) level D₁,...,D_N > 0. Specifically, we first show that for jointly stationary and ergodic sources X₁,...,X_N, R*c(D₁,...,D_N) is equal to the infimum of the n-th order total rate-distortion function R_{c,n}(D₁,...,D_N) over all n, where R_{c,n}(D₁,...,D_N) itself is given by the minimum of an information quantity over a set of auxiliary random variables. We then present an iterative algorithm for computing R_{c,n}(D₁,...,D_N) and demonstrate the convergence of the algorithm to the global minimum. The global convergence of the algorithm further enables us not only to establish a single-letter characterization of R*c(D₁,...,D_N) in a novel way when the N sources are an independent and identically distributed (IID) vector source, but also to demonstrate a somewhat surprising result (dubbed the more-and-less coding theorem): under some conditions on source frames and distortion, the more frames need to be encoded and transmitted, the less encoded data actually has to be sent. With the help of the algorithm, it is also shown by example that R*c(D₁,...,D_N) is in general much smaller than the total rate offered by the traditional greedy coding method, by which each frame is encoded in a locally optimal manner based on all information available to its encoder. As a by-product, an extended Markov lemma is established for correlated ergodic sources. From an information-theoretic point of view, it is interesting to compare causal video coding with predictive video coding, upon which all existing video coding standards proposed so far are based. In this thesis, by fixing N=3, we first derive a single-letter characterization of R*p(D₁,D₂,D₃) for an IID vector source (X₁,X₂,X₃) in which X₁ and X₂ are independent, and then demonstrate the existence of such X₁,X₂,X₃ for which R*p(D₁,D₂,D₃)>R*c(D₁,D₂,D₃) under some conditions on source frames and distortion. This result makes causal video coding an attractive framework for future video coding systems and standards.
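The alternating-minimization flavour of such an iterative computation can be illustrated with the classical Blahut-Arimoto algorithm for an ordinary single-source rate-distortion function; this is only an analogue of the algorithm for R_{c,n}(D₁,...,D_N) described above, with all source parameters assumed:

```python
import numpy as np

# Classical Blahut-Arimoto for R(D) of one memoryless source with
# squared-error distortion -- an illustrative analogue only.
p_x = np.array([0.25, 0.25, 0.25, 0.25])     # assumed source distribution
X = np.array([-1.5, -0.5, 0.5, 1.5])         # source alphabet
Xhat = X.copy()                              # reproduction alphabet (assumed)
d = (X[:, None] - Xhat[None, :]) ** 2        # distortion matrix d(x, xhat)

def blahut_arimoto(s, n_iter=500):
    """A fixed Lagrange multiplier s < 0 trades rate against distortion."""
    q = np.full(len(Xhat), 1.0 / len(Xhat))  # output marginal q(xhat)
    for _ in range(n_iter):
        # Optimal test channel for the current output marginal.
        w = q[None, :] * np.exp(s * d)
        w /= w.sum(axis=1, keepdims=True)    # w[x, xhat] = p(xhat | x)
        q = p_x @ w                          # updated output marginal
    D = np.sum(p_x[:, None] * w * d)
    R = np.sum(p_x[:, None] * w * np.log2(w / q[None, :]))
    return R, D

for s in (-0.5, -2.0, -8.0):
    R, D = blahut_arimoto(s)
    print(f"s={s:5.1f}  rate={R:.3f} bits  distortion={D:.3f}")
```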
The design of causal video coding is also considered in the thesis from an information-theoretic perspective, by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization, and then propose an algorithm for designing optimal fixed-rate causal scalar quantizers for causal video coding that minimize the total distortion across all sources. Simulation results show that, in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers up to a 16% quality improvement (distortion reduction).
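For comparison, a minimal fixed-rate scalar quantizer design in the classical Lloyd style; this single-source sketch (training data and rate assumed) omits the conditioning on previously coded frames that distinguishes causal scalar quantization:

```python
import numpy as np

rng = np.random.default_rng(3)
samples = rng.normal(size=50_000)        # assumed training source (i.i.d. Gaussian)
levels = 8                               # fixed rate: log2(8) = 3 bits/sample

# Lloyd iteration: alternate nearest-neighbour partitioning and
# centroid (conditional-mean) updates until the codebook stabilises.
codebook = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
for _ in range(100):
    idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
    new_codebook = np.array([
        samples[idx == k].mean() if np.any(idx == k) else codebook[k]
        for k in range(levels)
    ])
    if np.allclose(new_codebook, codebook, atol=1e-6):
        break
    codebook = new_codebook

mse = np.mean((samples - codebook[idx]) ** 2)
print("codebook:", np.round(codebook, 3))
print("MSE at 3 bits/sample:", round(mse, 5))
```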
