1 |
Rate-distortion optimization based on quality layer assignment for scalable matching pursuit video coding. Shih, Liang-chun, 28 August 2009.
Although fine granularity scalability (FGS) video coding based on matching pursuits and bit-plane coding has been shown to achieve better coding efficiency than discrete-cosine-transform-based FGS at low bit rates, it is not necessarily the most efficient method in terms of rate-distortion optimization (RDO). We propose an RDO FGS video coding scheme that divides a frame into blocks to generate block-based embedded bitstreams, then reorganizes those bitstreams into several quality layers according to their rate-distortion slopes. The experimental results compare FGS matching pursuit video coding with the proposed RDO-FGS matching pursuit video coding.
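To make the slope-based layer assignment concrete, the following is a minimal sketch, not the author's implementation: truncation points of each block's embedded bitstream are ranked by their rate-distortion slopes and distributed into quality layers against a set of decreasing slope thresholds. The data layout and threshold values are illustrative assumptions.

```python
def rd_slopes(points):
    """points: [(rate_bits, distortion), ...] for one block's embedded
    bitstream, ordered by increasing rate (the first entry is the state
    before any refinement bits). Returns -dD/dR for each segment."""
    return [(d0 - d1) / (r1 - r0)
            for (r0, d0), (r1, d1) in zip(points, points[1:])]

def assign_quality_layers(blocks, thresholds):
    """blocks: per-block truncation-point lists; thresholds: decreasing
    RD-slope cutoffs, one per quality layer. Segments with steeper slopes
    (more distortion reduction per bit) land in earlier layers; segments
    below every threshold are left out of the layered bitstream."""
    layers = [[] for _ in thresholds]
    for block_id, points in enumerate(blocks):
        for seg_id, slope in enumerate(rd_slopes(points)):
            for k, cutoff in enumerate(thresholds):
                if slope >= cutoff:
                    layers[k].append((block_id, seg_id))
                    break
    return layers

# Example: two blocks, three layers with slope cutoffs 1.0 / 0.1 / 0.01.
blocks = [[(0, 100.0), (40, 50.0), (100, 45.0)],
          [(0, 80.0), (30, 20.0), (90, 19.0)]]
print(assign_quality_layers(blocks, [1.0, 0.1, 0.01]))
```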
|
2 |
Image and Video Coding/Transcoding: A Rate Distortion Approach. Yu, Xiang, January 2008.
Due to the lossy nature of image/video compression and the expensive bandwidth and computation resources in a multimedia system, one of the key design issues for image and video coding/transcoding is to optimize the trade-off among distortion, rate, and complexity. This thesis studies the application of rate distortion (RD) optimization approaches to image and video coding/transcoding, both to explore the best RD performance of a video codec compatible with the H.264 video coding standard and to design computationally efficient down-sampling algorithms with high visual fidelity in the discrete cosine transform (DCT) domain.
RD optimization for video coding in this thesis pursues two objectives: to achieve the best encoding efficiency in terms of minimizing the actual RD cost, and to maintain decoding compatibility with H.264. By the actual RD cost, we mean a cost based on the final reconstruction error and the entire coding rate. Specifically, an operational RD method is proposed based on a soft decision quantization (SDQ) mechanism, which has its root in a fundamental RD-theoretic study of fixed-slope lossy data compression. Using SDQ instead of hard decision quantization, we establish a general framework in which motion prediction, quantization, and entropy coding in a hybrid video coding scheme such as H.264 are jointly designed to minimize the actual RD cost on a frame basis. The proposed framework can optimize any hybrid video coding scheme, provided that specific algorithms are designed to match the coding syntax of the given standard codec, so as to maintain compatibility with the standard.
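The contrast between hard and soft decision quantization can be illustrated with a toy sketch; this is not the thesis algorithm, and the rate model below is a stand-in for a true entropy coder such as CAVLC or CABAC.

```python
def bits_for_level(level):
    # Hypothetical rate model: zeros are cheap; cost grows roughly
    # logarithmically with magnitude. A real coder supplies exact costs.
    return 1 if level == 0 else 3 + 2 * abs(level).bit_length()

def hard_decision(coeff, q_step):
    # Conventional quantization: nearest reconstruction level.
    return round(coeff / q_step)

def soft_decision(coeff, q_step, lam):
    """Pick the level minimizing J = D + lam * R, searching a small
    neighborhood around the hard-decision level."""
    center = hard_decision(coeff, q_step)
    best_level, best_cost = center, float("inf")
    for level in range(center - 2, center + 3):
        dist = (coeff - level * q_step) ** 2
        cost = dist + lam * bits_for_level(level)
        if cost < best_cost:
            best_level, best_cost = level, cost
    return best_level

# With a large Lagrange multiplier, SDQ may prefer a cheaper, slightly
# more distorted level than the nearest one:
print(hard_decision(9.8, 4.0), soft_decision(9.8, 4.0, lam=60.0))
```

Here hard decision gives level 2, while soft decision may drop to level 1 or 0 once the bits saved outweigh the added squared error, which is exactly the trade-off the Lagrangian cost captures.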
Corresponding to the baseline profile and main profile syntaxes of H.264, we have proposed three RD algorithms: a graph-based algorithm for SDQ given motion prediction and quantization step sizes, an algorithm for residual coding optimization given motion prediction, and an iterative overall algorithm for jointly optimizing motion prediction, quantization, and entropy coding, embedded in that order. Among the three, the SDQ design is the core, and it is developed for a given entropy coding method. Specifically, two SDQ algorithms have been developed, based on the context adaptive variable length coding (CAVLC) of the H.264 baseline profile and the context adaptive binary arithmetic coding (CABAC) of the H.264 main profile, respectively.
Experimental results for the H.264 baseline codec optimization show that, over a set of typical test sequences, the proposed RD method achieves a better trade-off between rate and distortion: a 12% rate reduction on average at the same distortion (ranging from 30 dB to 38 dB PSNR) compared with the RD optimization method implemented in the H.264 baseline reference codec. Experimental results for optimizing H.264 main profile coding with CABAC show a 10% rate reduction over a main profile reference codec using CABAC, which also corresponds to a 20% rate reduction over the RD optimization method in the H.264 baseline reference codec, supporting our claim of having developed the best codec in terms of RD performance while maintaining compatibility with H.264.
By investigating the trade-off between distortion and complexity, we have also proposed a design framework for image/video transcoding with spatial resolution reduction, i.e., for down-sampling compressed images/video by an arbitrary ratio in the DCT domain. First, we derive a set of DCT-domain down-sampling methods that can be represented as a linear transform with double-sided matrix multiplication (LTDS) in the DCT domain. Then, for a pre-selected pixel-domain down-sampling method, we formulate an optimization problem for finding an LTDS that approximates the given pixel-domain method with the best trade-off between visual quality and computational complexity. The problem is solved by modeling the LTDS with a multi-layer perceptron network and training the network with a structural learning with forgetting algorithm. Finally, by selecting a pixel-domain reference method combining the popular Butterworth lowpass filter and cubic B-spline interpolation, the proposed framework discovers an LTDS with better visual quality and lower computational complexity than state-of-the-art methods in the literature.
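The LTDS form itself is compact enough to sketch. Below, purely as an illustration and not the learned operators from the thesis, a DCT block X is down-sampled as Y = A X B using truncation of low-frequency coefficients with an energy-preserving rescale; the thesis instead learns A and B to mimic a chosen pixel-domain filter.

```python
import numpy as np

def ltds_downsample(X, out_rows, out_cols):
    """X: (n, n) block of orthonormal 2-D DCT coefficients. Returns an
    (out_rows, out_cols) DCT block approximating a spatially down-sampled
    version of the underlying pixels, via Y = A @ X @ B."""
    n = X.shape[0]
    A = np.zeros((out_rows, n))
    B = np.zeros((n, out_cols))
    A[:, :out_rows] = np.eye(out_rows)        # keep low-frequency rows
    B[:out_cols, :] = np.eye(out_cols)        # keep low-frequency columns
    scale = np.sqrt(out_rows * out_cols) / n  # orthonormal-DCT rescaling
    return scale * (A @ X @ B)

# 8x8 DCT block down to 4x4 (a 2:1 ratio; arbitrary ratios change A and B).
Y = ltds_downsample(np.random.randn(8, 8), 4, 4)
print(Y.shape)
```

The appeal of the double-sided form is computational: the two small matrix products cost far less than an inverse DCT, pixel-domain filtering, re-sampling, and forward DCT.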
|
3 |
Rate Distortion Optimization for Interprediction in H.264/AVC Video Coding. Skeans, Jonathan P., 30 August 2013.
No description available.
|
4 |
Streaming Three-Dimensional Graphics with Optimized Transmission and Rendering Scalability. Tian, Dihong, 13 November 2006.
Distributed three-dimensional (3D) graphics applications exhibit both resemblance and uniqueness in comparison with conventional streaming media applications. The resemblance relates to the large data volume and the bandwidth-limited and error-prone transmission channel. The uniqueness is due to the polygon-based representation of 3D geometric meshes and their accompanying attributes such as textures. This specific data format introduces sophisticated rendering computation to display graphics models and therefore places an additional constraint on the streaming application.
The objective of this research is to provide scalable, error-resilient, and time-efficient solutions for high-quality 3D graphics applications in distributed and resource-constrained environments. Resource constraints range from rate-limited, error-prone channels to insufficient data-reception, computing, and display capabilities on client devices, so optimal treatment of these resources, with both transmission and rendering scalability, is important. The proposed research consists of three milestones. In the first milestone, we develop a joint mesh and texture optimization framework for scalable transmission and rendering of textured 3D models. We then address network behavior and develop a hybrid retransmission and error-protection mechanism for the on-demand delivery of 3D models. Finally, we advance from individual 3D models to 3D scene databases, which contain numerous objects interacting in one geometric space, and study joint application and transport approaches. By properly exploiting the properties of 3D scenes represented in multi-resolution hierarchies, we develop a joint source and channel coding method and a multi-streaming framework for streaming content-rich 3D scene databases with optimized transmission and rendering scalability under resource constraints.
|
5 |
Efficient methods for video coding and processing. Toivonen, T. (Tuukka), 02 January 2008.
Abstract
This thesis presents several novel improvements to video coding algorithms, including block-based motion estimation, quantization selection, and video filtering. Most of the presented improvements are fully compatible with the standards in general use, including MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264.
For quantization selection, new methods are developed based on rate-distortion theory. The first method obtains a locally optimal frame-level quantization parameter by taking frame-wise dependencies into account; the method is applicable to generic optimization problems, including motion estimation. The second method, aimed at real-time performance, heuristically modulates the quantization parameter across successive frames, significantly improving rate-distortion performance. It also utilizes multiple reference frames when available, as in H.264. Finally, coding efficiency is improved by introducing a new matching criterion for motion estimation that estimates the bit rate after transform coding more accurately, leading to better motion vectors.
For fast motion estimation, several improvements on prior methods are proposed. First, fast matching, based on filtering and subsampling, is combined with a state-of-the-art search strategy to create a very fast, high-quality motion estimation method. The successive elimination algorithm (SEA) is also applied, and its performance is improved by deriving a new, tighter lower bound and increasing it by a small constant, which eliminates a larger share of candidate motion vectors while degrading quality only insignificantly. As an alternative, the multilevel SEA (MSEA) is applied to H.264-compatible motion estimation, efficiently exploiting the various block sizes available in the standard.
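The core SEA pruning rule is simple to state: the absolute difference of the block sums is a lower bound on the SAD between two blocks, so any candidate whose bound already exceeds the best SAD found so far can be skipped without computing the full SAD. The sketch below shows only this basic rule, under assumed array shapes and search range; the thesis tightens the bound further and extends it to the multilevel (MSEA) form.

```python
import numpy as np

def sea_motion_search(ref_block, frame, top_left, search_range=8):
    """Full search over a window, pruned with the basic SEA lower bound
    |sum(ref) - sum(cand)| <= SAD(ref, cand)."""
    h, w = ref_block.shape
    ref_sum = int(ref_block.sum())
    y0, x0 = top_left
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            cand = frame[y:y + h, x:x + w]
            # SEA test: skip the full SAD when the bound cannot win.
            if abs(ref_sum - int(cand.sum())) >= best_sad:
                continue
            sad = int(np.abs(ref_block.astype(int) - cand.astype(int)).sum())
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

In practice the candidate block sums come from a precomputed integral image, so each bound costs only a handful of operations, which is where the elimination pays off.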
Then, a new method is developed for refining the motion vector obtained from any fast, suboptimal motion estimation method. The resulting algorithm can easily be adjusted to achieve the desired trade-off between computational complexity and rate-distortion performance. For refining integer motion vectors to half-pixel resolution, a new, very fast yet accurate method is developed based on the mathematical properties of bilinear interpolation.
Finally, novel number theoretic transforms are developed that are best suited for two-dimensional image filtering, including image restoration and enhancement; the methods are also developed with a view to using the transforms for very reliable motion estimation.
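As background for readers unfamiliar with the transform family, here is a generic radix-2 number theoretic transform sketch over a common NTT-friendly prime; it illustrates exact convolution in modular arithmetic but is not the specialized two-dimensional construction of the thesis.

```python
P, G = 998244353, 3  # prime 119 * 2**23 + 1 with primitive root 3

def ntt(a, invert=False):
    """In-place iterative radix-2 NTT; len(a) must be a power of two."""
    n = len(a)
    j = 0
    for i in range(1, n):  # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w_len = pow(G, (P - 1) // length, P)
        if invert:
            w_len = pow(w_len, P - 2, P)  # modular inverse of the root
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % P
                a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                w = w * w_len % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        a[:] = [x * n_inv % P for x in a]
    return a

def cyclic_convolve(a, b):
    """Exact cyclic convolution of two equal power-of-two-length lists:
    the arithmetic is error-free, unlike a floating-point FFT."""
    fa, fb = ntt(list(a)), ntt(list(b))
    return ntt([x * y % P for x, y in zip(fa, fb)], invert=True)

print(cyclic_convolve([1, 2, 3, 4], [1, 0, 0, 1]))
```

The error-free arithmetic is what makes NTTs attractive for filtering: no rounding noise accumulates, at the cost of working modulo a prime.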
|
6 |
Algorithms and Hardware Co-Design of HEVC Intra Encoders. Zhang, Yuanzhi, 01 December 2019.
The importance of digital video has grown greatly over the last two decades. Due to the rapid development of information and communication technologies, demand for Ultra-High Definition (UHD) video applications keeps increasing, but the most prevalent video compression standard, H.264/AVC, released in 2003, is inefficient for UHD video. The desire for compression efficiency superior to H.264/AVC led to the standardization of High Efficiency Video Coding (HEVC). Compared with H.264/AVC, HEVC offers roughly double the compression ratio at the same video quality, or a substantial improvement in video quality at the same bitrate. However, although HEVC/H.265 possesses superior compression efficiency, its complexity is several times that of H.264/AVC, impeding high-throughput implementation. Most researchers have focused merely on algorithm-level adaptations of the HEVC/H.265 standard to reduce computational intensity without considering hardware feasibility, and only a few works have explored efficient hardware architectures for the standard. In this dissertation, we investigate efficient algorithm adaptations and hardware architecture design of HEVC intra encoders, and we also explore a deep learning approach to mode prediction. From the algorithm point of view, we propose three efficient hardware-oriented adaptations: mode reduction, fast coding unit (CU) cost estimation, and group-based CABAC (context-adaptive binary arithmetic coding) rate estimation. Mode reduction reduces the mode candidates of each prediction unit (PU) in the rate-distortion optimization (RDO) process, which is both computation-intensive and time-consuming. Fast CU cost estimation reduces the complexity of the rate-distortion (RD) calculation for each CU. Group-based CABAC rate estimation parallelizes the processing of syntax elements to greatly improve rate-estimation throughput. From the hardware design perspective, a fully parallel hardware architecture of an HEVC intra encoder is developed to sustain UHD video compression at 4K@30fps. The fully parallel architecture introduces four prediction engines (PEs), each independently performing the full cycle of mode prediction, transform, quantization, inverse quantization, inverse transform, reconstruction, and rate-distortion estimation; PU blocks of different sizes are processed by different prediction engines simultaneously. An efficient hardware implementation of the group-based CABAC rate estimator is also incorporated into the proposed HEVC intra encoder for accurate, high-throughput rate estimation. To take advantage of deep learning, we further propose a fully connected layer based neural network (FCLNN) mode preselection scheme to reduce the number of RDO modes for luma prediction blocks: all angular prediction modes are classified into 7 prediction groups, each containing 3-5 prediction modes with similar prediction angles; a rough angle detection algorithm determines the prediction direction of the current block, and a small-scale FCLNN then refines the mode prediction.
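One way to picture the rough angle detection and 7-way mode grouping is the sketch below. The grouping of HEVC's 33 angular modes (indices 2-34) into contiguous bins and the gradient-based direction estimate are illustrative assumptions, not the dissertation's exact scheme.

```python
import numpy as np

# Hypothetical split of HEVC angular modes 2..34 into 7 contiguous groups.
ANGULAR_GROUPS = [list(range(2, 7)), list(range(7, 12)), list(range(12, 17)),
                  list(range(17, 22)), list(range(22, 27)),
                  list(range(27, 32)), list(range(32, 35))]

def rough_angle_group(block):
    """Estimate a luma block's dominant direction from its mean gradient
    and map it to one of the 7 angular-mode groups, leaving the final
    mode choice inside that group to a refinement stage (the FCLNN in
    the dissertation)."""
    gy, gx = np.gradient(block.astype(float))
    angle = np.arctan2(gy.sum(), gx.sum())      # mean gradient direction
    angle = (angle + np.pi) % np.pi             # fold into [0, pi)
    group_idx = min(int(angle / np.pi * 7), 6)  # 7 equal angular bins
    return ANGULAR_GROUPS[group_idx]

candidates = rough_angle_group(np.random.randint(0, 255, (8, 8)))
print(candidates)  # only these 3-5 angular modes proceed to full RDO
```

The payoff is that full RDO, the dominant cost in an intra encoder, runs on at most 5 angular candidates instead of 33.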
|
7 |
Quality Aware Video Processing for Deep Learning Based Analytics Tasks. Ikusan, Ademola, 23 August 2022.
No description available.
|
8 |
Low-complexity, content-based lossy and lossless image coding. Liu, Yi, 18 March 2015.
This doctoral research project aims to design an improved version of the LAR (Locally Adaptive Resolution) still-image codec, in terms of both compression performance and complexity. Several image compression standards have been proposed and used in multimedia applications, but research continues in this field to offer higher coding quality and/or lower processing complexity. JPEG was standardized twenty years ago, yet it remains the most widely used compression format today. Despite its better coding efficiency, the adoption of JPEG 2000 has been limited by its higher computational cost compared with JPEG. In 2008, the JPEG committee announced a call for proposals known as AIC (Advanced Image Coding), aiming to standardize technologies going beyond the existing JPEG standards. The LAR codec was proposed as one response to this call. The LAR framework aims to combine compression efficiency with a content-based representation, and it supports both lossy and lossless coding within the same structure. However, at the beginning of this study, the LAR codec did not implement rate-distortion optimization (RDO), a shortcoming that was detrimental to LAR during the AIC evaluation. Thus, in this work, we first characterize the impact of the codec's main parameters on compression efficiency, then construct RDO models that configure the parameters of LAR to achieve optimal or near-optimal coding efficiency. Based on these RDO models, a "quality constraint" method is introduced that encodes an image at a given target MSE/PSNR. The accuracy of the proposed technique, estimated as the ratio between the error variance and the setpoint, is about 10%. In addition, subjective quality measurement is taken into consideration, and the RDO models are applied locally in the image rather than globally; the perceptual quality is visibly improved, with a significant gain measured by the objective quality metric SSIM (structural similarity). Aiming at an efficient, low-complexity image codec, a new coding scheme is also proposed for the lossless mode under the LAR framework. In this context, all coding steps are modified to improve the final compression ratio, and a new classification module is introduced to decrease the entropy of the prediction errors. Experiments show that this lossless codec achieves compression ratios equivalent to those of JPEG 2000 while saving 76% of the encoding and decoding time on average.
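The "quality constraint" step has a closed-form core: a target PSNR converts directly to a target MSE, after which the RDO models select the parameter configuration predicted to meet it. The sketch below illustrates the conversion and a simple selection rule; the model table and the assumption that higher predicted MSE means lower rate are hypothetical stand-ins for the thesis's actual RDO models.

```python
def psnr_to_mse(psnr_db, peak=255.0):
    """PSNR = 10*log10(peak^2 / MSE)  =>  MSE = peak^2 / 10^(PSNR/10)."""
    return peak * peak / (10.0 ** (psnr_db / 10.0))

def pick_configuration(target_psnr_db, rd_model):
    """rd_model: list of (params, predicted_mse) pairs. Chooses the
    loosest configuration that still meets the MSE target (assumed to
    be the cheapest in rate); the roughly 10% model error reported
    above should be budgeted into the target."""
    target_mse = psnr_to_mse(target_psnr_db)
    feasible = [(p, m) for p, m in rd_model if m <= target_mse]
    if not feasible:  # target unreachable: return the best-quality option
        return min(rd_model, key=lambda pm: pm[1])[0]
    return max(feasible, key=lambda pm: pm[1])[0]

model = [({"qp": 20}, 6.0), ({"qp": 28}, 21.0), ({"qp": 36}, 80.0)]
print(psnr_to_mse(35.0))                # about 20.6
print(pick_configuration(35.0, model))  # {'qp': 20} meets the target
```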
|
9 |
Transform Coefficient Thresholding and Lagrangian Optimization for H.264 Video Coding. Carlsson, Pontus, January 2004.
H.264, also known as MPEG-4 Part 10: Advanced Video Coding, is the latest MPEG standard for video coding. It provides approximately 50% bit rate savings for equivalent perceptual quality compared to any previous standard. As in previous MPEG standards, only the bitstream syntax and the decoder are specified; hence, coding performance is determined not only by the standard itself but also by the implementation of the encoder. In this report we propose two methods for improving the coding performance while remaining fully compliant with the standard.
After transformation and quantization, the transform coefficients are usually entropy coded and embedded in the bitstream. However, some of them may be beneficial to discard if the number of saved bits is sufficiently large. This is usually referred to as coefficient thresholding and is investigated in the scope of H.264 in this report.
Lagrangian optimization for video compression has proven to yield substantial improvements in perceived quality, and the H.264 Reference Software has been designed around this concept. When performing Lagrangian optimization, lambda is a crucial parameter that determines the trade-off between rate and distortion. We propose a new method to select lambda and the quantization parameter for non-reference frames in H.264.
The two methods are shown to achieve significant improvements. When combined, they reduce the bitrate by around 12% while preserving video quality in terms of average PSNR.
To aid development of H.264, a software tool has been created to visualize the coding process and present statistics. This tool can display information such as bit distribution, motion vectors, predicted pictures, and motion-compensated block sizes.
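The thresholding decision described above reduces to a per-coefficient Lagrangian test: discard a quantized coefficient when the cost of coding it exceeds the cost of zeroing it. A minimal sketch follows, with a stand-in rate model in place of the real H.264 entropy coder.

```python
def bits_for_level(level):
    # Stand-in rate model; the true cost comes from the entropy coder.
    return 1 if level == 0 else 3 + 2 * abs(level).bit_length()

def should_threshold(coeff, level, q_step, lam):
    """Return True when zeroing the quantized level lowers the Lagrangian
    cost J = D + lam * R, i.e. the saved bits outweigh the added
    squared error of reconstructing the coefficient as zero."""
    keep_cost = (coeff - level * q_step) ** 2 + lam * bits_for_level(level)
    zero_cost = coeff ** 2 + lam * bits_for_level(0)
    return zero_cost < keep_cost

# A small coefficient is usually worth dropping at a high lambda:
print(should_threshold(coeff=4.3, level=1, q_step=4.0, lam=60.0))
```

In a real encoder the test is complicated by context dependence: removing one coefficient changes the rate of its neighbors, so greedy per-coefficient decisions are only an approximation.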
|