11 |
Video coding with adaptive vector quantization and rate distortion optimization
Wagner, Marcel. Unknown Date (has links) (PDF)
Dissertation, University of Freiburg (Breisgau), 2000.
|
12 |
Hardware Implementation Of Conditional Motion Estimation In Video Coding
Kakarala, Avinash 12 1900 (has links)
This thesis presents a rate-distortion analysis of conditional motion estimation, a process in which motion computation is restricted to only the active pixels in the video. We model active pixels as an independent and identically distributed Gaussian process and inactive pixels as a Gauss-Markov process, and derive the rate-distortion function for conditional motion estimation. Rate-distortion curves for the conditional motion estimation scheme are also presented. In addition, this thesis presents a hardware implementation of a block-based motion estimation algorithm. Block matching algorithms are difficult to implement on an FPGA due to their complexity. We implement the 2D logarithmic search algorithm to estimate the motion vectors for an image, using the sum of absolute differences (SAD) as the matching criterion. VHDL code for the motion estimation algorithm is verified using ISim and implemented with the Xilinx ISE Design tool. Synthesis results for the algorithm are also presented.
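For reference, the sketch below illustrates SAD-based block matching with a 2D logarithmic search in plain Python; the thesis itself targets a VHDL implementation on an FPGA. The block size, search range, and frame layout here are illustrative assumptions rather than details taken from the thesis.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

def two_d_log_search(ref, cur, top, left, block=16, search_range=8):
    """Estimate the motion vector of one block with a 2D logarithmic search.

    ref, cur : reference and current frames (2D uint8 arrays)
    top, left: top-left corner of the block in the current frame
    """
    target = cur[top:top + block, left:left + block]
    best_dy, best_dx = 0, 0
    step = max(1, search_range // 2)
    h, w = ref.shape
    while step >= 1:
        # Test the current center and its four neighbors at distance `step`.
        candidates = [(0, 0), (-step, 0), (step, 0), (0, -step), (0, step)]
        best_cost, best_move = None, (0, 0)
        for ddy, ddx in candidates:
            y, x = top + best_dy + ddy, left + best_dx + ddx
            if 0 <= y <= h - block and 0 <= x <= w - block:
                cost = sad(target, ref[y:y + block, x:x + block])
                if best_cost is None or cost < best_cost:
                    best_cost, best_move = cost, (ddy, ddx)
        if best_move == (0, 0):
            step //= 2                      # center is best: halve the step
        else:
            best_dy += best_move[0]         # move the search center
            best_dx += best_move[1]
    return best_dy, best_dx                 # motion vector (dy, dx)
```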
|
13 |
On the Asymptotic Rate-Distortion Function of Multiterminal Source Coding Under Logarithmic Loss
Li, Yanning January 2021 (has links)
We consider the asymptotic minimum rate under a logarithmic loss distortion constraint. More specifically, we find an expression for the asymptotic minimum rate as the given distortions approach 0. The problem under consideration is separate encoding and joint decoding of two correlated information sources, subject to a logarithmic loss distortion constraint.
We introduce a test channel whose transition probability (conditional probability mass function) captures the encoding and decoding process. First, we find the expression for the special case of doubly symmetric binary sources with binary-output test channels. The result is then extended to the case where the test channels are arbitrary: as the given distortions approach 0, the asymptotic rate coincides with that of the aforementioned special case. Finally, we consider the general case and show that the key findings for the special case continue to hold. / Thesis / Master of Applied Science (MASc)
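For context, logarithmic loss treats the decoder output as a probability distribution over the source alphabet rather than a single reconstructed symbol. A common way of writing the distortion measure (the notation here is generic and may differ from the thesis) is

```latex
d\bigl(x,\hat{x}\bigr) \;=\; \log\frac{1}{\hat{x}(x)},
```

where \hat{x} is the reproduced distribution; when the decoder outputs the true posterior of the source given the codeword, the expected distortion reduces to a conditional entropy.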
|
14 |
On the Rate-Distortion-Perception Tradeoff for Lossy Compression
Qian, Jingjing January 2023 (has links)
Deep generative models, when used for lossy image compression, can reconstruct realistic-looking outputs even at extremely low bit rates, whereas traditional compression methods often exhibit noticeable artifacts under similar conditions. As a result, there has been a substantial surge of interest in both the information-theoretic aspects and the practical architectures of deep-learning-based image compression. This thesis contributes to the emerging framework of rate-distortion-perception theory. The main results are summarized as follows:
1. We investigate the tradeoff among rate, distortion, and perception for binary sources. The distortion considered here is the Hamming distortion and the perception quality is measured by the total variation distance. We first derive a closed-form expression for the rate-distortion-perception tradeoff in the one-shot setting. This is followed by a complete characterization of the achievable distortion-perception region for a general representation. We then consider the universal setting in which the encoder is one-size-fits-all, and derive upper and lower bounds on the minimum rate penalty. Finally, we study successive refinement for both point-wise and set-wise versions of perception-constrained lossy compression. A necessary and sufficient condition for point-wise successive refinement and a sufficient condition for the successive refinability of universal representations are provided.
2. Next, we characterize the rate-distortion-perception function of vector Gaussian sources, extending the known scalar result, and show that in the high-perceptual-quality regime each component of the reconstruction (including the high-frequency components) is strictly correlated with the corresponding component of the source, in contrast to the classical water-filling solution (a sketch of that classical baseline follows this abstract). This result is obtained by optimizing over all encoder-decoder pairs subject to the distortion and perception constraints. We then consider the notion of a universal representation, where the encoder is fixed and the decoder is adapted to achieve different distortion-perception pairs. We characterize the achievable distortion-perception region for a fixed representation and demonstrate that the corresponding distortion-perception tradeoff is approximately optimal.
Our findings significantly enrich the nascent rate-distortion-perception theory, establishing a solid foundation for the field of learned image compression. / None / Doctor of Philosophy (PhD)
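For comparison with the water-filling remark in item 2, here is a minimal sketch of the classical reverse water-filling solution for the distortion-only rate-distortion function of a Gaussian vector with independent components. It is the baseline against which the rate-distortion-perception result is contrasted, not the method developed in the thesis; the bisection tolerance and example variances are arbitrary.

```python
import numpy as np

def reverse_water_filling(variances, total_distortion, tol=1e-9):
    """Classical reverse water-filling for independent Gaussian components.

    Finds the water level theta such that sum_i min(theta, var_i) = D, then
    returns the per-component distortions and the rate sum 0.5*log(var_i/D_i).
    """
    variances = np.asarray(variances, dtype=float)
    lo, hi = 0.0, float(variances.max())
    while hi - lo > tol:                      # bisect on the water level
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, variances).sum() > total_distortion:
            hi = theta
        else:
            lo = theta
    theta = 0.5 * (lo + hi)
    d = np.minimum(theta, variances)          # per-component distortions
    rate = 0.5 * np.sum(np.log(variances / d))   # total rate in nats
    return d, rate

# Example: three components with unequal variances and a moderate budget.
d, rate = reverse_water_filling([4.0, 1.0, 0.25], total_distortion=1.0)
```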
|
15 |
Rate Distortion Optimization for Interprediction in H.264/AVC Video Coding
Skeans, Jonathan P. 30 August 2013 (has links)
No description available.
|
16 |
Determining the Distributed Karhunen-Loève Transform via Convex Semidefinite Relaxation
Zhao, Xiaoyu January 2018 (has links)
The Karhunen-Loève Transform (KLT) is prevalent nowadays in communication and signal processing. This thesis aims at attaining the KLT in the encoders and achieving the minimum sum rate in the case of Gaussian multiterminal source coding.
In the general multiterminal source coding case, the data collected at the terminals are compressed in a distributed manner and then communicated to the fusion center for reconstruction. The data source is assumed to be a Gaussian random vector in this thesis. We introduce the rate-distortion function to formulate the optimization problem; it targets the minimum encoding sum rate subject to a given distortion. The main purpose of the thesis is to propose a distributed KLT for the encoders to process the sampled data and achieve the minimum sum rate.
To determine the distributed Karhunen-Loève transform, we propose three kinds of algorithms. The first iterative algorithm is derived directly from the saddle-point analysis of the optimization problem. We then obtain another algorithm by combining the original rate-distortion function with Wyner's common information; this algorithm also has to be solved iteratively. Moreover, we propose algorithms without iterations, which generate the unknown variables from the existing variables and compute the result directly. All of these algorithms make the lower and upper bounds of the minimum sum rate converge, since the gap can be reduced to a range that is small relative to the values of the bounds. / Thesis / Master of Applied Science (MASc)
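As background for the entry above, the following sketch computes the classical centralized Karhunen-Loève transform from an eigendecomposition of the sample covariance; the distributed and iterative algorithms studied in the thesis are not reproduced, and the toy data below are purely illustrative.

```python
import numpy as np

def klt(samples):
    """Centralized KLT: decorrelate samples (rows = observations).

    Returns the transform matrix (eigenvectors of the sample covariance,
    ordered by decreasing eigenvalue) and the transform coefficients.
    """
    x = samples - samples.mean(axis=0)        # remove the mean
    cov = np.cov(x, rowvar=False)             # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]         # sort by decreasing variance
    basis = eigvecs[:, order]
    return basis, x @ basis                   # coefficients are decorrelated

# Toy usage with correlated Gaussian data.
rng = np.random.default_rng(0)
data = rng.multivariate_normal([0, 0], [[2.0, 1.2], [1.2, 1.0]], size=1000)
basis, coeffs = klt(data)
```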
|
17 |
Approximate signal reconstruction from partial information
Moose, Phillip J. 10 June 2009 (has links)
It is known that transform techniques are not an optimal way to code a signal in terms of theoretical rate-distortion bounds. A signal may be coded more efficiently if side information is included with the signal during transmission. This side information can then be used to reconstruct the signal at some later time.
In this thesis, the type of transform coding used is Multiple Bases Representation (MBR). This coding scheme is known to perform better than transform coding that uses a single basis. The method of Projection Onto Convex Sets (POCS) is used to reconstruct an approximation to the MBR signal from the side information. Any number of constraints may be used as long as they form closed and convex sets, and the side information provides the a priori knowledge required to implement projections onto those sets.
Several closed and convex sets are examined, including the MBR, positivity, sign, zero-crossing, minimum-increase, and minimum-decrease constraints. Constraints that tend to limit energy are not as effective as constraints that introduce energy into the signal, especially when the observed image is used as the initialization vector.
When a different initialization vector is used, the POCS reconstruction performs considerably better. Two initialization vectors are proposed: the observed signal plus white noise, and the observed signal plus a constant. Initializing with the observed signal plus a constant performs better than using the observed signal alone.
One nonconvex constraint is also considered. The Laplacian histogram constraint requires other convex constraints to help ensure convergence of the reconstruction algorithm, but produces good-quality images. / Master of Science
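To make the projection idea concrete, here is a minimal POCS-style sketch using two simple closed convex sets, a positivity constraint and a constraint that fixes a known subset of samples (standing in for side information); the MBR, sign, and zero-crossing constraints from the thesis are not reproduced. The initialization follows the observed-signal-plus-constant idea discussed above, with an arbitrary offset.

```python
import numpy as np

def project_positive(x):
    """Projection onto the closed convex set {x : x >= 0}."""
    return np.maximum(x, 0.0)

def project_known(x, known_idx, known_vals):
    """Projection onto {x : x[known_idx] = known_vals} (known side information)."""
    y = x.copy()
    y[known_idx] = known_vals
    return y

def pocs_reconstruct(observed, known_idx, known_vals, init_offset=0.1, iters=50):
    """Alternate projections; for convex sets with nonempty intersection the
    iterates converge to a point satisfying all constraints."""
    x = observed + init_offset            # observed signal plus a constant
    for _ in range(iters):
        x = project_positive(x)
        x = project_known(x, known_idx, known_vals)
    return x

# Toy usage: a short signal with three samples known exactly.
observed = np.array([0.2, -0.1, 0.9, 0.4, -0.3])
x_hat = pocs_reconstruct(observed, known_idx=[0, 2, 4], known_vals=[0.25, 1.0, 0.0])
```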
|
18 |
Communication in decentralized control
Teneketzis, Demosthenis January 1980 (links)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Includes bibliographical references. / by Demosthenis Teneketzis. / Ph.D.
|
19 |
Streaming Three-Dimensional Graphics with Optimized Transmission and Rendering Scalability
Tian, Dihong 13 November 2006 (has links)
Distributed three-dimensional (3D) graphics applications exhibit both resemblance and uniqueness in comparison with conventional streaming media applications. The resemblance relates to the large data volume and the bandwidth-limited and error-prone transmission channel. The uniqueness is due to the polygon-based representation of 3D geometric meshes and their accompanying attributes such as textures. This specific data format introduces sophisticated rendering computation to display graphics models and therefore places an additional constraint on the streaming application.
The objective of this research is to provide scalable, error-resilient, and time-efficient solutions for high-quality 3D graphics applications in distributed and resource-constrained environments. Resource constraints range from rate-limited and error-prone channels to insufficient data-reception, computing, and display capabilities of client devices. Optimal resource treatment with transmission and rendering scalability is important under such circumstances. The proposed research consists of three milestones. In the first milestone, we develop a joint mesh and texture optimization framework for scalable transmission and rendering of textured 3D models. Then, we address network behaviors and develop a hybrid retransmission and error protection mechanism for the on-demand delivery of 3D models. Next, we advance from individual 3D models to 3D scene databases, which contain numerous objects interacting in one geometric space, and study joint application and transport approaches. By properly addressing the properties of 3D scenes represented in multi-resolution hierarchies, we develop a joint source and channel coding method and a multi-streaming framework for streaming the content-rich 3D scene databases toward optimized transmission and rendering scalability under resource constraints.
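As a purely illustrative aside on the joint mesh and texture optimization mentioned above, the sketch below greedily allocates a bit budget across hypothetical mesh and texture refinement layers, always spending on the layer with the best quality gain per bit. The layer sizes and quality numbers are invented for illustration, and the thesis's actual optimization framework is not reproduced here.

```python
# Hypothetical (bits, quality gain) pairs per refinement layer; illustrative only.
mesh_layers = [(2000, 10.0), (3000, 6.0), (5000, 3.0)]
texture_layers = [(1500, 8.0), (4000, 5.0), (6000, 2.5)]

def greedy_allocate(budget, streams):
    """Greedily pick the next layer with the highest quality gain per bit."""
    pointers = [0] * len(streams)      # next unselected layer in each stream
    chosen, spent, quality = [], 0, 0.0
    while True:
        best = None
        for s, layers in enumerate(streams):
            i = pointers[s]
            if i < len(layers) and spent + layers[i][0] <= budget:
                gain_per_bit = layers[i][1] / layers[i][0]
                if best is None or gain_per_bit > best[0]:
                    best = (gain_per_bit, s)
        if best is None:               # nothing else fits in the budget
            return chosen, spent, quality
        s = best[1]
        bits, gain = streams[s][pointers[s]]
        chosen.append((s, pointers[s]))
        spent += bits
        quality += gain
        pointers[s] += 1

layers_chosen, bits_used, total_quality = greedy_allocate(12000, [mesh_layers, texture_layers])
```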
|
20 |
Embedded system design and power-rate-distortion optimization for video encoding under energy constraints
Cheng, Wenye. January 2007 (links)
Thesis (M.S.)--University of Missouri-Columbia, 2007. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on January 3, 2008). Includes bibliographical references.
|