21

Efficient methods for video coding and processing

Toivonen, T. (Tuukka) 02 January 2008 (has links)
Abstract

This thesis presents several novel improvements to video coding algorithms, including block-based motion estimation, quantization selection, and video filtering. Most of the presented improvements are fully compatible with the standards in general use, including MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264.

For quantization selection, new methods are developed based on rate-distortion theory. The first method obtains a locally optimal frame-level quantization parameter while accounting for frame-wise dependencies; the method is applicable to generic optimization problems, including motion estimation. The second method, aimed at real-time performance, heuristically modulates the quantization parameter across sequential frames, significantly improving rate-distortion performance. It also utilizes multiple reference frames when available, as in H.264. Finally, coding efficiency is improved by introducing a new matching criterion for motion estimation which estimates the bit rate after transform coding more accurately, leading to better motion vectors.

For fast motion estimation, several improvements on prior methods are proposed. First, fast matching, based on filtering and subsampling, is combined with a state-of-the-art search strategy to create a very quick and high-quality motion estimation method. The successive elimination algorithm (SEA) is also applied to the method, and its performance is improved by deriving a new, tighter lower bound and increasing it by a small constant, which eliminates a larger portion of the candidate motion vectors while degrading quality only insignificantly. As an alternative, the multilevel SEA (MSEA) is applied to H.264-compatible motion estimation, efficiently utilizing the various block sizes available in the standard. A new method is then developed for refining the motion vector obtained from any fast, suboptimal motion estimation method; the resulting algorithm can easily be adjusted to trade off computational complexity against rate-distortion performance. For refining integer motion vectors to half-pixel resolution, a very quick yet accurate method is developed based on the mathematical properties of bilinear interpolation.

Finally, novel number-theoretic transforms are developed which are best suited for two-dimensional image filtering, including image restoration and enhancement, but methods are also developed with a view to using the transforms for very reliable motion estimation.
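To make the successive elimination idea above concrete, here is a minimal sketch (not taken from the thesis) of SEA-based block matching: a candidate motion vector is skipped whenever the sum-based lower bound on its SAD, optionally inflated by a small constant in the spirit of the thesis, already exceeds the best SAD found so far. The function name, search parameters, and the `margin` argument are illustrative assumptions; the thesis's tightened bound and multilevel (MSEA) variant are not reproduced here.

```python
import numpy as np

def sea_motion_search(cur_block, ref_frame, center, search_range=8, margin=0):
    """Successive elimination sketch: skip candidates whose sum-based lower
    bound (optionally inflated by a small constant 'margin') already exceeds
    the best SAD found so far."""
    h, w = cur_block.shape
    cy, cx = center
    cur_sum = int(cur_block.sum())
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + h, x:x + w]
            # SEA lower bound: |sum(B) - sum(C)| <= SAD(B, C)
            if abs(cur_sum - int(cand.sum())) + margin >= best_sad:
                continue  # candidate cannot beat the current best; eliminate it
            sad = np.abs(cur_block.astype(np.int64) - cand.astype(np.int64)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

With `margin = 0` the search returns the same motion vector as an exhaustive full search; a small positive margin eliminates more candidates at the cost of a marginal quality loss, which is the trade-off the abstract describes.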
22

Algorithms and Hardware Co-Design of HEVC Intra Encoders

Zhang, Yuanzhi 01 December 2019 (has links) (PDF)
Digital video has become extremely important, and its importance has grown greatly over the last two decades. Due to the rapid development of information and communication technologies, the demand for Ultra-High Definition (UHD) video applications keeps growing. However, the most prevalent video compression standard, H.264/AVC, released in 2003, is inefficient for UHD video. The desire for compression efficiency superior to H.264/AVC led to the standardization of High Efficiency Video Coding (HEVC). Compared with H.264/AVC, HEVC offers roughly double the compression ratio at the same video quality, or a substantial improvement in video quality at the same bitrate. Although HEVC/H.265 possesses superior compression efficiency, its complexity is several times that of H.264/AVC, impeding high-throughput implementation. Most researchers have focused merely on algorithm-level adaptations of the HEVC/H.265 standard to reduce computational intensity without considering hardware feasibility, and the exploration of efficient hardware architecture design remains far from exhaustive: only a few research works have studied efficient hardware architectures for the HEVC/H.265 standard.

In this dissertation, we investigate efficient algorithm adaptations and hardware architecture design of HEVC intra encoders. We also explore a deep learning approach to mode prediction. From the algorithm point of view, we propose three efficient hardware-oriented algorithm adaptations: mode reduction, fast coding unit (CU) cost estimation, and group-based CABAC (context-adaptive binary arithmetic coding) rate estimation. Mode reduction aims to reduce the mode candidates of each prediction unit (PU) in the rate-distortion optimization (RDO) process, which is both computation-intensive and time-consuming. Fast CU cost estimation is applied to reduce the complexity of the rate-distortion (RD) calculation for each CU. Group-based CABAC rate estimation parallelizes syntax-element processing to greatly improve rate-estimation throughput.

From the hardware design perspective, a fully parallel hardware architecture of an HEVC intra encoder is developed to sustain UHD video compression at 4K@30fps. The fully parallel architecture introduces four prediction engines (PEs), each of which independently performs the full cycle of mode prediction, transform, quantization, inverse quantization, inverse transform, reconstruction, and rate-distortion estimation. PU blocks of different sizes are processed by different prediction engines simultaneously. In addition, an efficient hardware implementation of the group-based CABAC rate estimator is incorporated into the proposed HEVC intra encoder for accurate, high-throughput rate estimation.

To take advantage of deep learning, we also propose a fully connected layer based neural network (FCLNN) mode preselection scheme to reduce the number of RDO modes for luma prediction blocks. All angular prediction modes are classified into 7 prediction groups; each group contains 3-5 prediction modes that exhibit a similar prediction angle. A rough angle detection algorithm determines the prediction direction of the current block, and a small-scale FCLNN is then exploited to refine the mode prediction.
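As a rough illustration of the mode-preselection front end described above, the sketch below groups the 33 HEVC angular modes (modes 2-34) into 7 direction groups and picks a candidate group from the dominant gradient direction of the block. The grouping boundaries, the gradient-based angle estimate, and all names are assumptions made for illustration; the dissertation's actual grouping and its FCLNN refinement stage are not shown.

```python
import numpy as np

# Hypothetical grouping of the 33 HEVC angular modes (2-34) into 7 direction
# groups; the exact boundaries used in the dissertation are not specified here.
ANGULAR_GROUPS = np.array_split(np.arange(2, 35), 7)

def rough_angle_group(block):
    """Pick a candidate mode group from the dominant gradient direction."""
    gy, gx = np.gradient(block.astype(np.float64))
    # The dominant edge direction is perpendicular to the mean gradient.
    angle = (np.degrees(np.arctan2(gy.sum(), gx.sum())) + 90.0) % 180.0
    group = int(angle / (180.0 / len(ANGULAR_GROUPS))) % len(ANGULAR_GROUPS)
    return group, list(ANGULAR_GROUPS[group])
```

Only the modes in the selected group (plus DC and planar, in a real encoder) would then enter the RDO stage, which is what reduces the number of candidate modes.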
23

Tree Encoding of Analog Data Sources

Bodie, John Bruce 04 1900 (has links)
Concepts of tree coding and of rate-distortion theory are applied to the problem of the transmission of analog signals over digital channels. Coding schemes are developed which yield improvements of up to six dB in signal-to-noise ratio over conventional techniques for the reproduction of speech waveforms. / Thesis / Master of Engineering (MEngr)
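For readers unfamiliar with tree coding of analog sources, the following heavily simplified sketch (not the thesis's scheme) encodes a short waveform segment by exhaustively searching a binary code tree built on a first-order predictor and keeping the path with minimum squared error; the predictor coefficient, reconstruction levels, and search depth are arbitrary assumptions.

```python
import numpy as np
from itertools import product

def tree_encode(samples, levels=(-1.0, 1.0), depth=8, a=0.9):
    """Exhaustive tree search: each branch emits one reconstruction level fed
    through a first-order predictor; the path minimizing squared error over
    'depth' samples is selected (delayed-decision coding)."""
    x = np.asarray(samples[:depth], dtype=np.float64)
    best_path, best_err = None, np.inf
    for path in product(range(len(levels)), repeat=len(x)):
        pred, recon = 0.0, []
        for idx in path:
            pred = a * pred + levels[idx]   # predictor state + innovation level
            recon.append(pred)
        err = float(np.sum((x - np.array(recon)) ** 2))
        if err < best_err:
            best_err, best_path = err, path
    return best_path, best_err
```

Practical tree coders prune this search (e.g., keeping only the best few paths at each depth) rather than enumerating every path as done here.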
24

Quality Aware Video Processing for Deep Learning Based Analytics Tasks

Ikusan, Ademola 23 August 2022 (has links)
No description available.
25

Fast Rate-Distortion Optimal Packetization of Embedded Bitstreams into Independent Source Packets

Xu, Jiayi January 2011 (has links)
This thesis addresses the rate-distortion optimal packetization (RDOP) of embedded bitstreams into independent source packets for the purpose of limiting error propagation in transmission over noisy packet channels. The embedded stream is assumed to be an interleaving of $K$ independently decodable basic streams. The goal is to partition these basic streams into $N$ ($N \leq K$) independent source packets.

The RDOP problem previously formulated by Wu et al. focused on finding the partition that minimizes the distortion when all packets are decoded. The authors proposed a dynamic programming algorithm which works under both high-bit-rate and low-bit-rate scenarios. In this thesis, we extend the problem formulation to finding the partition which minimizes the expected distortion at the receiver for a wide range of transmission scenarios, including unequal/equal error/erasure protection and multiple description codes. We then show that the dynamic programming algorithm of Wu et al. can be extended to solve the new RDOP problem.

Furthermore, we propose a faster algorithm to find the globally optimal solution based on the divide-and-conquer technique, under the assumption that all basic streams have convex rate-distortion curves. The proposed algorithm reduces the running time from $O(K^{2}LN)$, achieved by the dynamic programming solution, to $O(NKL\log K)$. Experiments performed on SPIHT-coded images further validate that the speed-up is significant in practice. / Master of Applied Science (MASc)
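For context, the dynamic-programming baseline mentioned above can be sketched as the classical optimal partition of an ordered set of $K$ basic streams into $N$ contiguous groups. The cost function, the contiguity assumption, and the complexity bookkeeping below are illustrative assumptions, and the convexity-based divide-and-conquer speed-up of the thesis is not shown.

```python
import numpy as np

def optimal_partition(cost, K, N):
    """Partition items 0..K-1 into N contiguous groups minimizing the total
    group cost. cost(i, j) returns the cost of grouping items i..j-1 (e.g. the
    expected distortion of packing those basic streams into one source packet).
    Uses O(K^2 N) cost evaluations, in the spirit of the DP baseline above."""
    INF = float("inf")
    dp = np.full((N + 1, K + 1), INF)
    dp[0][0] = 0.0
    choice = np.zeros((N + 1, K + 1), dtype=int)
    for n in range(1, N + 1):
        for j in range(n, K + 1):
            for i in range(n - 1, j):
                c = dp[n - 1][i] + cost(i, j)
                if c < dp[n][j]:
                    dp[n][j], choice[n][j] = c, i
    # Recover the group boundaries by backtracking.
    bounds, j = [], K
    for n in range(N, 0, -1):
        bounds.append((choice[n][j], j))
        j = choice[n][j]
    return dp[N][K], bounds[::-1]
```

The divide-and-conquer algorithm of the thesis exploits the convexity of the per-stream rate-distortion curves to avoid examining every split point, which is where the reduction to $O(NKL\log K)$ comes from.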
26

Symmetric Generalized Gaussian Multiterminal Source Coding

Chang, Yameng Jr January 2018 (has links)
Consider a generalized multiterminal source coding system, where (l choose m) encoders, each observing a distinct size-m subset of l (l ≥ 2) zero-mean unit-variance symmetrically correlated Gaussian sources with correlation coefficient ρ, compress their observations in such a way that a joint decoder can reconstruct the sources within a prescribed mean squared error distortion based on the compressed data. The optimal rate-distortion performance of this system was previously known only for the two extreme cases m = l (the centralized case) and m = 1 (the distributed case), and except when ρ = 0, the centralized system can achieve strictly lower compression rates than the distributed system under all non-trivial distortion constraints. Somewhat surprisingly, it is established in the present thesis that the optimal rate-distortion performance of the afore-described generalized multiterminal source coding system with m ≥ 2 coincides with that of the centralized system for all distortions when ρ ≤ 0, and for distortions below an explicit positive threshold (depending on m) when ρ > 0. Moreover, when ρ > 0, the minimum achievable rate of generalized multiterminal source coding subject to an arbitrary positive distortion constraint d is shown to be within a finite gap (depending on m and d) of its centralized counterpart in the large-l limit, except possibly at the critical distortion d = 1 − ρ. / Thesis / Master of Applied Science (MASc)
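As background for the centralized benchmark referred to above (a standard result, not one of the thesis's contributions), the covariance matrix of l unit-variance sources with common correlation ρ has one eigenvalue 1 + (l−1)ρ and l−1 eigenvalues 1 − ρ, and the centralized (m = l) rate-distortion function follows from reverse water-filling over these eigenvalues; the per-source distortion normalization below is an assumption.

```latex
% Symmetric covariance and its eigenvalues
\Sigma \;=\; (1-\rho)\,I_l + \rho\,\mathbf{1}\mathbf{1}^{\mathsf T},
\qquad
\lambda_1 = 1+(l-1)\rho,
\qquad
\lambda_2 = \dots = \lambda_l = 1-\rho .
% Reverse water-filling for the centralized (m = l) system with
% average per-source distortion D (total distortion lD):
R(D) \;=\; \sum_{i=1}^{l} \frac{1}{2}\,\log\frac{\lambda_i}{D_i},
\qquad
D_i = \min(\lambda_i,\theta),
\qquad
\sum_{i=1}^{l} D_i = l\,D .
```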
27

Robust Distributed Compression of Symmetrically Correlated Gaussian Sources

Zhang, Xuan January 2018 (has links)
Consider a lossy compression system with l distributed encoders and a centralized decoder. Each encoder compresses its observed source and forwards the compressed data to the decoder for joint reconstruction of the target signals under the mean squared error distortion constraint. It is assumed that the observed sources can be expressed as the sum of the target signals and the corruptive noises, which are generated independently from two (possibly different) symmetric multivariate Gaussian distributions. Depending on the parameters of such Gaussian distributions, the rate-distortion limit of this lossy compression system is characterized either completely or for a subset of distortions (including, but not necessarily limited to, those sufficiently close to the minimum distortion achievable when the observed sources are directly available at the decoder). The results are further extended to the robust distributed compression setting, where the outputs of a subset of encoders may also be used to produce a non-trivial reconstruction of the corresponding target signals. In particular, we obtain in the high-resolution regime a precise characterization of the minimum achievable reconstruction distortion based on the outputs of k + 1 or more encoders when every k out of all l encoders are operated collectively in the same mode that is greedy in the sense of minimizing the distortion incurred by the reconstruction of the corresponding k target signals with respect to the average rate of these k encoders. / Thesis / Master of Applied Science (MASc)
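To make the observation model above concrete, here is a minimal sketch (with illustrative, assumed parameter values) that draws target signals and corruptive noises independently from two symmetric multivariate Gaussian distributions and forms each encoder's observation as their sum.

```python
import numpy as np

def symmetric_cov(l, var, rho):
    """Covariance with a common variance and a common pairwise correlation."""
    return var * ((1 - rho) * np.eye(l) + rho * np.ones((l, l)))

def sample_observations(l=4, n=1000, sig_var=1.0, sig_rho=0.5,
                        noise_var=0.1, noise_rho=0.2, seed=0):
    """Draw target signals X and corruptive noises Z independently from two
    symmetric multivariate Gaussians; column i of Y = X + Z is the sequence
    observed by encoder i."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(np.zeros(l), symmetric_cov(l, sig_var, sig_rho), size=n)
    Z = rng.multivariate_normal(np.zeros(l), symmetric_cov(l, noise_var, noise_rho), size=n)
    return X, X + Z  # target signals and noisy observations
```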
28

Joint source video coding : joint rate control for H.264/AVC video coding

Teixeira, Luís Miguel Lopes January 2012 (has links)
Doctoral thesis. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 2012
29

Error Correction and Concealment of Block Based, Motion-Compensated Temporal Prediction, Transform Coded Video

Robie, David Lee 30 March 2005 (has links)
The use of the Internet and wireless networks to bring multimedia to the consumer continues to expand. The transmission of these products is always subject to corruption due to errors such as bit errors or lost and ill-timed packets; however, in many cases, such as real-time video transmission, retransmission requests (ARQ) are not practical. Receivers must therefore be capable of recovering from corrupted data. Errors can be mitigated using forward error correction in the encoder or error concealment techniques in the decoder. This thesis investigates the use of forward error correction (FEC) techniques in the encoder and error concealment in the decoder in block-based, motion-compensated, temporal prediction, transform codecs. It shows improvement over standard FEC applications and improvements in error concealment relative to the Motion Picture Experts Group (MPEG) standard. To this end, this dissertation describes the following contributions and proofs-of-concept in the area of error concealment and correction in block-based video transmission: a temporal error concealment algorithm which uses motion-compensated macroblocks from previous frames; a spatial error concealment algorithm which uses the Hough transform to detect edges in both foreground and background colors and uses directional interpolation or directional filtering to provide improved edge reproduction; a codec which uses data hiding to transmit error correction information; an enhanced codec which builds upon the last by improving performance in the error-free environment while maintaining excellent error recovery capabilities; and a method to allocate Reed-Solomon (R-S) packet-based forward error correction that decreases distortion (using a PSNR metric) at the receiver compared to standard FEC techniques. Finally, under the constraint of a constant bit rate, the tradeoff between traditional R-S FEC and alternate forward concealment information (FCI) is evaluated. Each of these developments is compared and contrasted with state-of-the-art techniques and shows improvements using widely accepted metrics. The dissertation concludes with a discussion of future work.
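As an illustration of the temporal concealment idea described above, the sketch below replaces a lost macroblock with a motion-compensated patch from the previous frame, choosing among the neighbouring macroblocks' motion vectors (plus the zero vector) by a simple boundary-matching cost. This generic scheme and all parameter names are assumptions rather than the dissertation's exact algorithm, and it assumes the lost macroblock does not lie on the frame border.

```python
import numpy as np

def conceal_lost_macroblock(prev_frame, cur_frame, mb_y, mb_x, neighbor_mvs, mb=16):
    """Temporal concealment sketch: motion-compensate candidate patches from
    the previous frame using the neighbours' motion vectors and keep the one
    whose top/bottom rows best match the received pixels around the lost MB."""
    h, w = prev_frame.shape
    best_patch, best_cost = None, np.inf
    for dy, dx in list(neighbor_mvs) + [(0, 0)]:
        y, x = mb_y + dy, mb_x + dx
        if y < 1 or x < 1 or y + mb + 1 > h or x + mb + 1 > w:
            continue
        patch = prev_frame[y:y + mb, x:x + mb]
        # Boundary-matching cost against correctly received rows above and below.
        top = np.abs(patch[0].astype(int) - cur_frame[mb_y - 1, mb_x:mb_x + mb].astype(int)).sum()
        bot = np.abs(patch[-1].astype(int) - cur_frame[mb_y + mb, mb_x:mb_x + mb].astype(int)).sum()
        if top + bot < best_cost:
            best_cost, best_patch = top + bot, patch
    return best_patch
```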
30

Prioritized 3D Scene Reconstruction and Rate-Distortion Efficient Representation for Video Sequences

Imre, Evren 01 August 2007 (has links) (PDF)
In this dissertation, a novel scheme performing 3D reconstruction of a scene from a 2D video sequence is presented. To this end, the trajectories of the salient features in the scene are first determined as a sequence of displacements via the Kanade-Lucas-Tomasi tracker and a Kalman filter. Then, a tentative camera trajectory with respect to a metric reference reconstruction is estimated. All frame pairs are ordered with respect to their amenability to 3D reconstruction by a metric that utilizes the baseline distances and the number of tracked correspondences between the frames. The ordered frame pairs are processed via a sequential structure-from-motion algorithm to estimate the sparse structure and camera matrices. The metric and the associated reconstruction algorithm are shown through experiments to outperform their counterparts in the literature. Finally, a mesh-based, rate-distortion efficient representation is constructed through a novel procedure driven by the error between a target image and its prediction from a reference image and the current mesh. At each iteration, the triangular patch whose projection on the predicted image has the largest error is identified. Within this projected region and its correspondence on the reference frame, feature matches are extracted. The pair with the least conformance to the planar model is used to determine the vertex to be added to the mesh. The procedure is shown to outperform the dense depth-map representation in all tested cases, and the block motion vector representation in scenes with large depth range, in the rate-distortion sense.
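As a rough stand-in for the frame-pair ordering metric described above, the sketch below scores each pair by its baseline weighted by the number of tracked correspondences, with a minimum-match cutoff; the scoring formula and threshold are assumptions, not the dissertation's metric.

```python
def rank_frame_pairs(pairs, min_matches=30):
    """Order frame pairs by amenability to 3D reconstruction. Each pair is
    (i, j, baseline, n_matches); pairs with too few tracked correspondences
    are discarded, and the rest are sorted by baseline * n_matches."""
    scored = [(b * n, i, j) for (i, j, b, n) in pairs if n >= min_matches]
    return [(i, j) for _, i, j in sorted(scored, reverse=True)]
```

The highest-ranked pairs would then seed the sequential structure-from-motion stage, with the remaining pairs processed in the resulting order.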
