161 |
Volumetric reconstruction of rigid objects from image sequences. Ramchunder, Naren. January 2012 (has links)
Live video communication over bandwidth-constrained ad-hoc radio networks necessitates high compression rates. To this end, a model-based video communication system that incorporates flexible and accurate 3D modelling and reconstruction is proposed in part. Model-based video coding (MBVC) is known to provide the highest compression rates, but usually compromises photorealism and object detail. High compression ratios are achieved at the encoder by extracting and transmitting only the parameters which describe changes to object orientation and motion within the scene. The decoder uses the received parameters to animate reconstructed objects within the synthesised scene. This is scene understanding rather than video compression. 3D reconstruction of the objects and scenes present at the encoder is the focus of this research.
3D reconstruction is accomplished by utilizing the Patch-based Multi-view Stereo (PMVS) framework of Yasutaka Furukawa and Jean Ponce. Surface geometry is initially represented as a sparse set of orientated rectangular patches obtained from matching feature correspondences in the input images. To increase reconstruction density these patches are iteratively expanded, and filtered using visibility constraints to remove outliers. Depending on the availability of segmentation information, there are two methods for initialising a mesh model from the reconstructed patches. The first method initialises the mesh from the object's visual hull. The second technique initialises the mesh directly from the reconstructed patches. The resulting mesh is then refined by enforcing patch reconstruction consistency and regularization constraints for each vertex on the mesh.
To improve robustness to outliers, two enhancements to the above framework are proposed. The first uses photometric consistency during feature matching to increase the probability of selecting the correct matching point at the first attempt. The second approach estimates the orientation of the patch such that its photometric discrepancy score for each of its visible images is minimised prior to optimisation. The overall reconstruction algorithm is shown to be flexible and robust in that it can reconstruct 3D models for objects and scenes. It is able to automatically detect and discard outliers and may be initialised by simple visual hulls. The demonstrated ability to account for the surface orientation of the patches during photometric consistency computations is a key performance criterion. Final results show that the algorithm is capable of accurately reconstructing objects containing fine surface details, deep concavities and regions without salient textures. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2012.
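A minimal sketch of the patch photometric discrepancy idea referred to above, assuming the oriented patch has already been projected into each image it is visible in and sampled on a small grid (the sampling step and helper names here are hypothetical, not PMVS code):

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized sample grids."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a.ravel(), b.ravel()) / denom) if denom > 0 else 0.0

def photometric_discrepancy(samples):
    """Average (1 - NCC) between a reference view and every other visible view.

    `samples` holds one HxW grid of intensities per visible image, obtained by
    projecting the same oriented patch into that image; a low score means the
    patch position and normal are photo-consistent across the views.
    """
    ref = samples[0]
    return sum(1.0 - ncc(ref, s) for s in samples[1:]) / (len(samples) - 1)

# Toy usage: three nearly identical 5x5 samples give a near-zero discrepancy.
rng = np.random.default_rng(0)
base = rng.random((5, 5))
views = [base, base + 0.01 * rng.random((5, 5)), base + 0.01 * rng.random((5, 5))]
print(photometric_discrepancy(views))
```

Minimising a score of this kind over the patch centre and normal is what the orientation-aware enhancement above drives at; views that do not actually see the surface fail to reach a low score and can be discarded.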
|
162 |
Research and developments of Dirac video codec. Tun, Myo. January 2008 (has links)
In digital video compression, apart from storage, successful transmission of the compressed video data over bandwidth-limited, error-prone channels is another important issue. To enable a video codec for broadcasting applications, the corresponding coding tools (e.g. error-resilient coding, rate control, etc.) need to be implemented. These are normally non-normative parts of a video codec and hence their specifications are not defined in the standard. In Dirac as well, the original codec is optimized for storage only, so several non-normative encoding tools are still required before it can be used in other types of application.

With "Research and Developments of the Dirac Video Codec" as the research title, phase I of the project is mainly focused on error-resilient transmission over a noisy channel. The error-resilient coding method used here is a simple, low-complexity scheme that provides error-resilient transmission of the compressed bitstream of the Dirac video encoder over a packet-erasure wired network. The scheme combines source and channel coding: error-resilient source coding is achieved by data partitioning in the wavelet-transformed domain, and channel coding is achieved through the application of either a Rate-Compatible Punctured Convolutional (RCPC) code or a Turbo Code (TC), using unequal error protection between the header plus motion vectors (MV) and the data. The scheme is designed mainly for the packet-erasure channel, i.e. it is targeted at Internet broadcasting applications.

For a bandwidth-limited channel, however, it is still necessary to limit the amount of bits generated by the encoder according to the available bandwidth, in addition to the error-resilient coding. So, in the second phase of the project, a rate control algorithm is presented. The algorithm is based upon a Quality Factor (QF) optimization method in which the QF of the encoded video is adaptively changed so that the average bitrate remains constant over each Group of Pictures (GOP). A relation between the bitrate R and the QF, called the Rate-QF (R-QF) model, is derived in order to estimate the optimum QF of the current encoding frame for a given target bitrate R.

In some applications, like video conferencing, real-time encoding and decoding with minimum delay is crucial, but the ability to encode and decode in real time is largely determined by the complexity of the encoder and decoder. The motion estimation process inside the encoder is the most time-consuming stage, so reducing its complexity brings real-time operation one step closer. As a partial contribution toward real-time application, in the final phase of the research a fast Motion Estimation (ME) strategy is designed and implemented. It combines a modified adaptive search with a semi-hierarchical approach to motion estimation. The same strategy was implemented in both Dirac and H.264 in order to investigate its performance on different codecs. Together with this fast ME strategy, a method called partial cost function calculation is presented to further reduce the computational load of the cost function calculation. The calculation is based upon a pre-defined set of patterns chosen so that they have as much coverage as possible over the whole block.
In summary, this research work has contributed to the error-resilient transmission of the compressed bitstream of the Dirac video encoder over a bandwidth-limited, error-prone channel. In addition, the final phase of the research has partially contributed toward real-time application of the Dirac video codec by implementing a fast motion estimation strategy together with the partial cost function calculation idea.
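The partial cost function idea can be sketched in a few lines: the block-matching cost is accumulated only at a pre-defined subset of positions spread across the block instead of over every pixel. The checkerboard pattern below is purely illustrative and not the pattern set used in the thesis:

```python
import numpy as np

# Illustrative sparse pattern: a checkerboard over an 8x8 block, so the kept
# positions still cover the whole block (the thesis defines its own patterns).
PATTERN = [(y, x) for y in range(8) for x in range(8) if (x + y) % 2 == 0]

def partial_sad(block, candidate, pattern=PATTERN):
    """Sum of absolute differences evaluated only at the pattern positions."""
    return sum(abs(int(block[y, x]) - int(candidate[y, x])) for y, x in pattern)

# Toy usage: cost of matching a block against a (1,1)-shifted candidate.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
print(partial_sad(frame[0:8, 0:8], frame[1:9, 1:9]))
```

Halving the evaluated positions roughly halves the per-candidate cost of motion estimation, at the price of a slightly noisier cost surface.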
|
163 |
A Study of Perceptually Tuned, Wavelet Based, Rate Scalable, Image and Video Compression. Wei, Ming. 05 1900 (has links)
In this dissertation, first, we have proposed and implemented a new perceptually tuned, wavelet based, rate scalable color image encoding/decoding system based on a human perceptual model. It builds on state-of-the-art research on the embedded wavelet image compression technique and the Contrast Sensitivity Function (CSF) of the Human Visual System (HVS), and extends this scheme to handle optimal bit allocation among multiple bands, such as Y, Cb, and Cr. Our experimental image codec shows very exciting results in compression performance and visual quality compared to the new wavelet based international still image compression standard, JPEG 2000. Our codec also shows significantly better speed performance and comparable visual quality in comparison to the best available codec for rate scalable color image compression, CSPIHT, which is based on Set Partitioning In Hierarchical Trees (SPIHT) and the Karhunen-Loeve Transform (KLT).

Secondly, a novel wavelet based interframe compression scheme has been developed and put into practice. It is based on the Flexible Block Wavelet Transform (FBWT) that we have developed. FBWT based interframe compression is very efficient in both compression and speed performance. The compression performance of our video codec is compared with H.263+: at the same bit rate, our encoder is comparable to the H.263+ scheme and, with a slightly lower Peak Signal-to-Noise Ratio (PSNR) value, produces a more visually pleasing result. This implementation also preserves the scalability of the wavelet embedded coding technique.

Thirdly, the scheme for handling optimal bit allocation among color bands for still imagery has been modified and extended to accommodate the spatio-temporal sensitivity of the HVS model. The bit allocation among color bands, based on Kelly's spatio-temporal CSF model, is designed to achieve the perceptual optimum for human eyes. A perceptually tuned, wavelet based, rate scalable video encoding/decoding system has been designed and implemented based on this new bit allocation scheme.

Finally, to present the potential applications of our rate scalable video codec, a prototype system for rate scalable video streaming over the Internet has been designed and implemented to deal with the bandwidth unpredictability of the Internet.
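A highly simplified sketch of the kind of band-level bit allocation described above: the bit budget is split across Y, Cb and Cr in proportion to a perceptual weight times a log-variance measure of each band. The weights and the proportional rule are placeholder assumptions for illustration only, not Kelly's spatio-temporal CSF model:

```python
import numpy as np

def allocate_bits(bands, weights, total_bits):
    """Split a bit budget across color bands in proportion to
    perceptual_weight * log2(1 + variance), an illustrative rule."""
    importance = {name: weights[name] * np.log2(1.0 + float(np.var(b)))
                  for name, b in bands.items()}
    total = sum(importance.values())
    return {name: int(round(total_bits * imp / total))
            for name, imp in importance.items()}

# Toy usage: the luma band gets the largest share under these assumed weights.
rng = np.random.default_rng(2)
bands = {"Y": rng.normal(0, 30, (64, 64)),
         "Cb": rng.normal(0, 10, (32, 32)),
         "Cr": rng.normal(0, 12, (32, 32))}
weights = {"Y": 1.0, "Cb": 0.4, "Cr": 0.5}  # assumed CSF-style weights, not from the thesis
print(allocate_bits(bands, weights, total_bits=4096))
```

Any scheme of this shape reduces to the same question the dissertation studies: how strongly the chroma bands should be down-weighted relative to luma for a given viewing condition.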
|
164 |
Improving the Utility of Egocentric Videos. Biao Ma (6848807) 15 August 2019 (has links)
For either entertainment or documentation purposes, people are starting to record their lives using egocentric cameras mounted on either a person or a vehicle. Our target is to improve the utility of these egocentric videos.

For egocentric videos with an entertainment purpose, we aim to enhance the viewing experience to improve overall enjoyment. We focus on First-Person Videos (FPVs), which are recorded by wearable cameras. People record FPVs in order to share their First-Person Experience (FPE). However, raw FPVs are usually too shaky to watch, which ruins the experience. We explore the mechanism of human perception and propose a biometric-based measurement called the Viewing Experience (VE) score, which measures both the stability and the First-person Motion Information (FPMI) of an FPV. This enables us to further develop a system to stabilize FPVs while preserving their FPMI. Experimental results show that our system is robust and efficient in measuring and improving the VE of FPVs.

For egocentric videos whose goal is documentation, we aim to build a system that can centrally collect, compress and manage the videos. We focus on Dash Camera Videos (DCVs), which people use to document the route they drive each day. We propose a system that classifies videos according to the route driven, using GPS information and visual information. When new DCVs are recorded, their bit-rate can be reduced by jointly compressing them with videos recorded on a similar route. Experimental results show that our system outperforms other similar solutions and standard HEVC, particularly under varying illumination.

The First-Person Video viewing experience topic and the dash-cam video compression topic are representative of two classes of application that rely on Visual Odometers (VOs): visual augmentation and robotic perception. Different applications have different requirements for VOs, and the performance of VOs is also influenced by many different factors. To help our system and other users working on similar applications, we further propose a system that can investigate the performance of different VOs under various factors. The proposed system is shown to be able to provide suggestions on selecting a VO based on the application.
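The GPS side of the route classification above can be pictured with a simple trajectory-distance test: a new dash-cam track is matched to a stored route if the average distance from its points to that route is small. The haversine distance and the 50 m threshold are assumptions for illustration; the thesis combines GPS with visual information:

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def route_distance(track, route):
    """Mean distance from each track point to its nearest point on the route."""
    return sum(min(haversine_m(p, q) for q in route) for p in track) / len(track)

def classify(track, routes, threshold_m=50.0):
    """Index of the closest stored route, or None if this looks like a new route."""
    best, idx = min((route_distance(track, r), i) for i, r in enumerate(routes))
    return idx if best < threshold_m else None

# Toy usage: the track lies along the first of two stored routes.
routes = [[(22.4190, 114.2060), (22.4200, 114.2070)],
          [(22.5000, 114.3000), (22.5010, 114.3010)]]
track = [(22.4191, 114.2061), (22.4199, 114.2069)]
print(classify(track, routes))  # -> 0
```

Videos that land in the same route class become candidates for joint compression, which is where the bit-rate saving in the abstract comes from.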
|
165 |
HEVC optimization in mobile environments. Unknown Date (has links)
Recently, multimedia applications and their use have grown dramatically in popularity, in large part due to mobile device adoption by the consumer market. Applications such as video conferencing have gained popularity. These applications and others have a strong video component that uses the mobile device's resources, including processing time, network bandwidth, memory use, and battery life. The goal is to reduce the need for these resources by reducing the complexity of the coding process. Mobile devices offer unique characteristics that can be exploited for optimizing video codecs. The combination of small display size, video resolution, and human vision factors, such as acuity, allows encoder optimizations that will not (or will only minimally) impact subjective quality.

The focus of this dissertation is optimizing video services in mobile environments. Industry has begun migrating from H.264 video coding to the more resource-intensive but compression-efficient High Efficiency Video Coding (HEVC). However, there has been no proper evaluation and optimization of HEVC for mobile environments. Subjective quality evaluations were performed to assess the relative quality of H.264 and HEVC. This allows for better use of device resources and migration to the new codec where it is most useful. The complexity of HEVC is a significant barrier to adoption on mobile devices, and complexity reduction methods are necessary. Optimal use of encoding options is needed to maximize quality and compression while minimizing encoding time. Methods for optimizing coding mode selection for HEVC were developed. The complexity of HEVC encoding can be further reduced by exploiting the mismatch between the resolution of the video, the resolution of the mobile display, and the ability of the human eyes to acquire and process video under these conditions. The perceptual optimizations developed in this dissertation use the properties of spatial information processing (visual acuity) and temporal information processing (motion perception) to reduce the complexity of HEVC encoding. A unique feature of the proposed methods is that they reduce encoding complexity and encoding time.

The proposed HEVC encoder optimization methods reduced encoding time by 21.7% and bitrate by 13.4% with insignificant impact on subjective quality evaluations. These methods can easily be implemented today within HEVC. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
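The display/acuity mismatch exploited above can be made concrete with a small calculation: the display's angular resolution in pixels per degree is compared with an approximate acuity limit, and the source resolution with the display resolution, to decide when extra detail cannot be perceived anyway. The 60 pixels-per-degree figure is a commonly quoted approximation used here only as an illustrative threshold:

```python
import math

def pixels_per_degree(display_width_px, display_width_m, viewing_distance_m):
    """Angular resolution of a display for a viewer at the given distance."""
    width_deg = 2 * math.degrees(math.atan(display_width_m / (2 * viewing_distance_m)))
    return display_width_px / width_deg

def detail_exceeds_perception(src_width_px, display_width_px, display_width_m,
                              viewing_distance_m, acuity_ppd=60.0):
    """True when the source carries more horizontal detail than the display can
    show or the eye can resolve, i.e. a case where coarser encoding should have
    little subjective impact."""
    ppd = pixels_per_degree(display_width_px, display_width_m, viewing_distance_m)
    return src_width_px > display_width_px or ppd > acuity_ppd

# Toy usage: a 1920-pixel-wide source on an 11 cm wide phone screen held at 30 cm.
print(round(pixels_per_degree(1080, 0.11, 0.30), 1))      # ~52 pixels per degree
print(detail_exceeds_perception(1920, 1080, 0.11, 0.30))  # True: source exceeds display
```

When this kind of test is positive, an encoder can safely favour larger partitions and cheaper mode decisions, which is the intuition behind the perceptual complexity reductions reported above.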
|
166 |
Adaptation of variable-bit-rate compressed video for transport over a constant-bit-rate communication channel in broadband networks. January 1995 (has links)
by Chi-yin Tse. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 118-[121]). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Video Compression and Transport --- p.2 / Chapter 1.2 --- VBR-CBR Adaptation of Video Traffic --- p.5 / Chapter 1.3 --- Research Contributions --- p.7 / Chapter 1.3.1 --- Spatial Smoothing: Video Aggregation --- p.8 / Chapter 1.3.2 --- Temporal Smoothing: A Control-Theoretic Study --- p.8 / Chapter 1.4 --- Organization of Thesis --- p.9 / Chapter 2 --- Preliminaries --- p.13 / Chapter 2.1 --- MPEG Compression Scheme --- p.13 / Chapter 2.2 --- Problems of Transmitting MPEG Video --- p.17 / Chapter 2.3 --- Two-layer Coding and Transport Strategy --- p.19 / Chapter 2.3.1 --- Framework of MPEG-based Layering --- p.19 / Chapter 2.3.2 --- Transmission of GS and ES --- p.20 / Chapter 2.3.3 --- Problems of Two-layer Video Transmission --- p.20 / Chapter 3 --- Video Aggregation --- p.24 / Chapter 3.1 --- Motivation and Basic Concept of Video Aggregation --- p.25 / Chapter 3.1.1 --- Description of Video Aggregation --- p.28 / Chapter 3.2 --- MPEG Video Aggregation System --- p.29 / Chapter 3.2.1 --- Shortcomings of the MPEG Video Bundle Scenario with Two-Layer Coding and Cell-Level Multiplexing --- p.29 / Chapter 3.2.2 --- MPEG Video Aggregation --- p.31 / Chapter 3.2.3 --- MPEG Video Aggregation System Architecture --- p.33 / Chapter 3.3 --- Variations of MPEG Video Aggregation System --- p.35 / Chapter 3.4 --- Experimental Results --- p.38 / Chapter 3.4.1 --- Comparison of Video Aggregation and Cell-level Multiplexing --- p.40 / Chapter 3.4.2 --- Varying Amount of the Allocated Bandwidth --- p.48 / Chapter 3.4.3 --- Varying Number of Sequences --- p.50 / Chapter 3.5 --- Conclusion --- p.53 / Chapter 3.6 --- Appendix: Alternative Implementation of MPEG Video Aggregation --- p.53 / Chapter 3.6.1 --- Profile Approach --- p.54 / Chapter 3.6.2 --- Bit-Plane Approach --- p.54 / Chapter 4 --- A Control-Theoretic Study of Video Traffic Adaptation --- p.58 / Chapter 4.1 --- Review of Previous Adaptation Schemes --- p.60 / Chapter 4.1.1 --- A Generic Model for Adaptation Scheme --- p.60 / Chapter 4.1.2 --- Objectives of Adaptation Controller --- p.61 / Chapter 4.2 --- Motivation for Control-Theoretic Study --- p.64 / Chapter 4.3 --- Linear Feedback Controller Model --- p.64 / Chapter 4.3.1 --- Encoder Model --- p.65 / Chapter 4.3.2 --- Adaptation Controller Model --- p.69 / Chapter 4.4 --- Analysis --- p.72 / Chapter 4.4.1 --- Stability --- p.73 / Chapter 4.4.2 --- Robustness against Coding-mode Switching --- p.83 / Chapter 4.4.3 --- Unit-Step Responses and Unit-Sample Responses --- p.84 / Chapter 4.5 --- Implementation --- p.91 / Chapter 4.6 --- Experimental Results --- p.95 / Chapter 4.6.1 --- Overall Performance of the Adaptation Scheme --- p.97 / Chapter 4.6.2 --- Weak-Control versus Strong-Control --- p.99 / Chapter 4.6.3 --- Varying Amount of Reserved Bandwidth --- p.101 / Chapter 4.7 --- Conclusion --- p.103 / Chapter 4.8 --- Appendix I: Further Research --- p.103 / Chapter 4.9 --- Appendix II: Review of Previous Adaptation Schemes --- p.106 / Chapter 4.9.1 --- Watanabe et al.'s Scheme --- p.106 / Chapter 4.9.2 --- MPEG's Scheme --- p.107 / Chapter 4.9.3 --- Lee et al.'s Modification --- p.109 / Chapter 4.9.4 --- Chen's Adaptation Scheme --- p.110 / Chapter 5 --- Conclusion --- p.116 / Bibliography --- p.118
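The temporal-smoothing, control-theoretic side of this work (Chapter 4 in the contents above) can be pictured with a toy feedback loop: a proportional controller nudges the quantisation scale of the next frame according to how far the smoothing buffer sits from a target occupancy, so a variable-rate source can be drained over a constant-rate channel. The constants and the simple bits-per-frame model below are assumptions for illustration, not the thesis's adaptation controller:

```python
def simulate(frame_complexity, channel_rate, buffer_size=400_000,
             buffer_target=0.5, kp=4.0):
    """Toy VBR-to-CBR adaptation: a proportional controller on buffer occupancy
    scales the per-frame bit production (assumed model: bits = complexity / q)."""
    q, buf = 1.0, buffer_target * buffer_size
    history = []
    for complexity in frame_complexity:
        bits = complexity / q                       # encode the frame
        buf = max(0.0, buf + bits - channel_rate)   # buffer fills, channel drains at CBR
        error = buf / buffer_size - buffer_target   # signed occupancy error
        q = max(0.2, q * (1.0 + kp * error))        # raise q when the buffer overfills
        history.append((round(bits), round(buf), round(q, 2)))
    return history

# Toy usage: a bursty source against 100 kbit available per frame interval.
source = [80_000, 250_000, 300_000, 120_000, 60_000, 90_000, 280_000, 100_000]
for row in simulate(source, channel_rate=100_000):
    print(row)
```

Questions such as how large the gain `kp` may be before the loop oscillates are exactly the stability and weak-control versus strong-control issues listed in the contents.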
|
167 |
Reliable video transmission over internet. January 2000 (has links)
by Sze Ho Pong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 50-[53]). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Related Work --- p.3 / Chapter 1.2 --- Contributions of the Thesis --- p.3 / Chapter 1.3 --- Organization of the Thesis --- p.4 / Chapter 2 --- Background --- p.5 / Chapter 2.1 --- Best-effort Internet - The Lossy Network --- p.5 / Chapter 2.2 --- Effects of Packet Loss on Streamed Video --- p.7 / Chapter 2.3 --- Loss Recovery Schemes in Video Streaming --- p.8 / Chapter 3 --- Comparison of Two Packet-Loss Detection Schemes --- p.10 / Chapter 3.1 --- Gap Detection (GD) --- p.12 / Chapter 3.2 --- Time-Out (TO) Detection --- p.14 / Chapter 3.3 --- Mathematical Comparison --- p.17 / Chapter 4 --- The Combined Loss-Detection Algorithm --- p.21 / Chapter 4.1 --- System Architecture --- p.22 / Chapter 4.2 --- Loss Detection and Recovery --- p.23 / Chapter 4.2.1 --- Detecting Data Packet Losses Transmitted for First Time --- p.24 / Chapter 4.2.2 --- Detecting Losses of Retransmitted Packet --- p.28 / Chapter 4.3 --- Buffering Techniques --- p.32 / Chapter 4.3.1 --- Determining Packet-Loss Rate in Presentation --- p.33 / Chapter 4.4 --- Mapping Packet-Loss Rate to Degradation of Video Quality --- p.38 / Chapter 5 --- Experimental Results and Analysis --- p.40 / Chapter 5.1 --- Experimental Setup --- p.40 / Chapter 5.2 --- Small Delay Jitter Environment --- p.42 / Chapter 5.3 --- Large Delay Jitter Environment --- p.44 / Chapter 5.3.1 --- Using Low Bit-Rate Stream --- p.44 / Chapter 5.3.2 --- Using High Bit-Rate Stream --- p.44 / Chapter 6 --- Conclusions and Future Work --- p.47 / Chapter 6.1 --- Conclusions --- p.47 / Chapter 6.2 --- Future Work --- p.49 / Bibliography --- p.50
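Of the two packet-loss detection schemes compared in Chapter 3 of the contents above, gap detection (GD) is the easier to sketch: a packet is declared lost as soon as a later sequence number arrives with intermediate numbers missing. The wrap-around handling below is a generic illustration of the idea, not the thesis's exact algorithm:

```python
def detect_gaps(received_seq, seq_bits=16):
    """Report sequence numbers presumed lost, judging from gaps between arrivals.

    A forward jump past the expected number marks the skipped numbers as lost
    the moment the later packet arrives; packets arriving late (re-ordered or
    retransmitted) are simply ignored here.
    """
    modulus = 1 << seq_bits
    lost, expected = [], None
    for seq in received_seq:
        if expected is None:
            expected = (seq + 1) % modulus
            continue
        gap = (seq - expected) % modulus
        if gap < modulus // 2:          # seq is at or ahead of what we expected
            lost.extend((expected + i) % modulus for i in range(gap))
            expected = (seq + 1) % modulus
        # else: an old packet arriving late; leave `expected` untouched
    return lost

# Toy usage: packets 3 and 4 never arrive.
print(detect_gaps([0, 1, 2, 5, 6]))  # -> [3, 4]
```

The trade-off the thesis studies is visible even here: gap detection reacts immediately but needs a later packet to arrive, whereas time-out detection catches losses at the tail of a burst at the cost of waiting.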
|
168 |
Creating virtual environment by 3D computer vision techniques. January 2000 (has links)
Lao Tze Kin Jackie. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 83-87). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- 3D Modeling using Active Contour --- p.3 / Chapter 1.2 --- Rectangular Virtual Environment Construction --- p.5 / Chapter 1.3 --- Thesis Contribution --- p.7 / Chapter 1.4 --- Thesis Outline --- p.7 / Chapter 2 --- Background --- p.9 / Chapter 2.1 --- Panoramic Representation --- p.9 / Chapter 2.1.1 --- Static Mosaic --- p.10 / Chapter 2.1.2 --- Advanced Mosaic Representation --- p.15 / Chapter 2.1.3 --- Panoramic Walkthrough --- p.17 / Chapter 2.2 --- Active Contour Model --- p.24 / Chapter 2.2.1 --- Parametric Active Contour Model --- p.28 / Chapter 2.3 --- 3D Shape Estimation --- p.29 / Chapter 2.3.1 --- Model Formation with both intrinsic and extrinsic parameters --- p.29 / Chapter 2.3.2 --- Model Formation with only Intrinsic Parameter and Epipolar Geometry --- p.32 / Chapter 3 --- 3D Object Modeling using Active Contour --- p.39 / Chapter 3.1 --- Point Acquisition Through Active Contour --- p.40 / Chapter 3.2 --- Object Segmentation and Panorama Generation --- p.43 / Chapter 3.2.1 --- Object Segmentation --- p.44 / Chapter 3.2.2 --- Panorama Construction --- p.44 / Chapter 3.3 --- 3D modeling and Texture Mapping --- p.45 / Chapter 3.3.1 --- Texture Mapping From Parameterization --- p.46 / Chapter 3.4 --- Experimental Results --- p.48 / Chapter 3.4.1 --- Experimental Error --- p.49 / Chapter 3.4.2 --- Comparison between Virtual 3D Model with Actual Model --- p.54 / Chapter 3.4.3 --- Comparison with Existing Techniques --- p.55 / Chapter 3.5 --- Discussion --- p.55 / Chapter 4 --- Rectangular Virtual Environment Construction --- p.57 / Chapter 4.1 --- Rectangular Environment Construction using Traditional (Horizontal) Panoramic Scenes --- p.58 / Chapter 4.1.1 --- Image Manipulation --- p.59 / Chapter 4.1.2 --- Panoramic Mosaic Creation --- p.59 / Chapter 4.1.3 --- Measurement of Panning Angles --- p.61 / Chapter 4.1.4 --- Estimate Side Ratio --- p.62 / Chapter 4.1.5 --- Wireframe Modeling and Cylindrical Projection --- p.63 / Chapter 4.1.6 --- Experimental Results --- p.66 / Chapter 4.2 --- Rectangular Environment Construction using Vertical Panoramic Scenes --- p.67 / Chapter 4.3 --- Building virtual environments for complex scenes --- p.73 / Chapter 4.4 --- Comparison with Existing Techniques --- p.75 / Chapter 4.5 --- Discussion and Future Directions --- p.77 / Chapter 5 --- System Integration --- p.79 / Chapter 6 --- Conclusion --- p.81 / Bibliography --- p.87
|
169 |
Parental finite state vector quantizer and vector wavelet transform-linear predictive coding. January 1998 (has links)
by Lam Chi Wah. / Thesis submitted in: December 1997. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 89-91). / Abstract also in Chinese. / Chapter Chapter 1 --- Introduction to Data Compression and Image Coding --- p.1 / Chapter 1.1 --- Introduction --- p.1 / Chapter 1.2 --- Fundamental Principle of Data Compression --- p.2 / Chapter 1.3 --- Some Data Compression Algorithms --- p.3 / Chapter 1.4 --- Image Coding Overview --- p.4 / Chapter 1.5 --- Image Transformation --- p.5 / Chapter 1.6 --- Quantization --- p.7 / Chapter 1.7 --- Lossless Coding --- p.8 / Chapter Chapter 2 --- Subband Coding and Wavelet Transform --- p.9 / Chapter 2.1 --- Subband Coding Principle --- p.9 / Chapter 2.2 --- Perfect Reconstruction --- p.11 / Chapter 2.3 --- Multi-Channel System --- p.13 / Chapter 2.4 --- Discrete Wavelet Transform --- p.13 / Chapter Chapter 3 --- Vector Quantization (VQ) --- p.16 / Chapter 3.1 --- Introduction --- p.16 / Chapter 3.2 --- Basic Vector Quantization Procedure --- p.17 / Chapter 3.3 --- Codebook Searching and the LBG Algorithm --- p.18 / Chapter 3.3.1 --- Codebook --- p.18 / Chapter 3.3.2 --- LBG Algorithm --- p.19 / Chapter 3.4 --- Problem of VQ and Variations of VQ --- p.21 / Chapter 3.4.1 --- Classified VQ (CVQ) --- p.22 / Chapter 3.4.2 --- Finite State VQ (FSVQ) --- p.23 / Chapter 3.5 --- Vector Quantization on Wavelet Coefficients --- p.24 / Chapter Chapter 4 --- Vector Wavelet Transform-Linear Predictor Coding --- p.26 / Chapter 4.1 --- Image Coding Using Wavelet Transform with Vector Quantization --- p.26 / Chapter 4.1.1 --- Future Standard --- p.26 / Chapter 4.1.2 --- Drawback of DCT --- p.27 / Chapter 4.1.3 --- "Wavelet Coding and VQ, the Future Trend" --- p.28 / Chapter 4.2 --- Mismatch between Scalar Transformation and VQ --- p.29 / Chapter 4.3 --- Vector Wavelet Transform (VWT) --- p.30 / Chapter 4.4 --- Example of Vector Wavelet Transform --- p.34 / Chapter 4.5 --- Vector Wavelet Transform - Linear Predictive Coding (VWT-LPC) --- p.36 / Chapter 4.6 --- An Example of VWT-LPC --- p.38 / Chapter Chapter 5 --- Vector Quantization with Inter-band Bit Allocation (IBBA) --- p.40 / Chapter 5.1 --- Bit Allocation Problem --- p.40 / Chapter 5.2 --- Bit Allocation for Wavelet Subband Vector Quantizer --- p.42 / Chapter 5.2.1 --- Multiple Codebooks --- p.42 / Chapter 5.2.2 --- Inter-band Bit Allocation (IBBA) --- p.42 / Chapter Chapter 6 --- Parental Finite State Vector Quantizers (PFSVQ) --- p.45 / Chapter 6.1 --- Introduction --- p.45 / Chapter 6.2 --- Parent-Child Relationship Between Subbands --- p.46 / Chapter 6.3 --- Wavelet Subband Vector Structures for VQ --- p.48 / Chapter 6.3.1 --- VQ on Separate Bands --- p.48 / Chapter 6.3.2 --- InterBand Information for Intraband Vectors --- p.49 / Chapter 6.3.3 --- Cross band Vector Methods --- p.50 / Chapter 6.4 --- Parental Finite State Vector Quantization Algorithms --- p.52 / Chapter 6.4.1 --- Scheme I: Parental Finite State VQ with Parent Index Equals Child Class Number --- p.52 / Chapter 6.4.2 --- Scheme II: Parental Finite State VQ with Parent Index Larger than Child Class Number --- p.55 / Chapter Chapter 7 --- Simulation Result --- p.58 / Chapter 7.1 --- Introduction --- p.58 / Chapter 7.2 --- Simulation Result of Vector Wavelet Transform (VWT) --- p.59 / Chapter 7.3 --- Simulation Result of Vector Wavelet Transform - Linear Predictive Coding (VWT-LPC) --- p.61 / Chapter 7.3.1 --- First Test --- p.61 / Chapter 7.3.2 --- Second Test --- p.61 / Chapter 7.3.3 --- Third Test --- p.61 / Chapter 7.4 --- Simulation Result of Vector Quantization Using Inter-band Bit Allocation (IBBA) --- p.62 / Chapter 7.5 --- Simulation Result of Parental Finite State Vector Quantizers (PFSVQ) --- p.63 / Chapter Chapter 8 --- Conclusion --- p.86 / REFERENCE --- p.89
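The LBG algorithm listed in Chapter 3.3.2 above is the standard generalised Lloyd iteration for codebook design; a compact sketch with random initialisation follows (the thesis may well use the splitting initialisation instead):

```python
import numpy as np

def lbg(vectors, codebook_size, iters=20, seed=0):
    """Train a VQ codebook: assign each training vector to its nearest codeword,
    then move every codeword to the centroid of the vectors assigned to it."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        for k in range(codebook_size):
            members = vectors[nearest == k]
            if len(members):                 # keep empty cells unchanged
                codebook[k] = members.mean(axis=0)
    return codebook, nearest

# Toy usage: quantise 200 two-dimensional training vectors to a 4-word codebook.
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 2))
codebook, assignments = lbg(data, 4)
print(codebook)
```

The finite-state and parental variants discussed in the thesis restrict which codebook is searched for a given vector based on context, rather than changing this basic training loop.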
|
170 |
Non-expansive symmetrically extended wavelet transform for arbitrarily shaped video object plane. January 1998 (has links)
by Lai Chun Kit. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 68-70). / Abstract also in Chinese. / ACKNOWLEDGMENTS --- p.IV / ABSTRACT --- p.v / Chapter Chapter 1 --- Traditional Image and Video Coding --- p.1 / Chapter 1.1 --- Introduction --- p.1 / Chapter 1.2 --- Fundamental Principle of Compression --- p.1 / Chapter 1.3 --- Entropy - Value of Information --- p.2 / Chapter 1.4 --- Performance Measure --- p.3 / Chapter 1.5 --- Image Coding Overview --- p.4 / Chapter 1.5.1 --- Digital Image Formation --- p.4 / Chapter 1.5.2 --- Needs of Image Compression --- p.4 / Chapter 1.5.3 --- Classification of Image Compression --- p.5 / Chapter 1.5.4 --- Transform Coding --- p.6 / Chapter 1.6 --- Video Coding Overview --- p.8 / Chapter Chapter 2 --- Discrete Wavelets Transform (DWT) and Subband Coding --- p.11 / Chapter 2.1 --- Subband Coding --- p.11 / Chapter 2.1.1 --- Introduction --- p.11 / Chapter 2.1.2 --- Quadrature Mirror Filters (QMFs) --- p.12 / Chapter 2.1.3 --- Subband Coding for Image --- p.13 / Chapter 2.2 --- Discrete Wavelets Transformation (DWT) --- p.15 / Chapter 2.2.1 --- Introduction --- p.15 / Chapter 2.2.2 --- Wavelet Theory --- p.15 / Chapter 2.2.3 --- Comparison Between Fourier Transform and Wavelet Transform --- p.16 / Chapter Chapter 3 --- Non-expansive Symmetric Extension --- p.19 / Chapter 3.1 --- Introduction --- p.19 / Chapter 3.2 --- Types of extension scheme --- p.19 / Chapter 3.3 --- Non-expansive Symmetric Extension and Symmetric Sub-sampling --- p.21 / Chapter Chapter 4 --- Content-based Video Coding in MPEG-4 Proposed Standard --- p.24 / Chapter 4.1 --- Introduction --- p.24 / Chapter 4.2 --- Motivation of the new MPEG-4 standard --- p.25 / Chapter 4.2.1 --- Changes in the production of audio-visual material --- p.25 / Chapter 4.2.2 --- Changes in the consumption of multimedia information --- p.25 / Chapter 4.2.3 --- Reuse of audio-visual material --- p.26 / Chapter 4.2.4 --- Changes in mode of implementation --- p.26 / Chapter 4.3 --- Objective of MPEG-4 standard --- p.27 / Chapter 4.4 --- Technical Description of MPEG-4 --- p.28 / Chapter 4.4.1 --- Overview of MPEG-4 coding system --- p.28 / Chapter 4.4.2 --- Shape Coding --- p.29 / Chapter 4.4.3 --- Shape Adaptive Texture Coding --- p.33 / Chapter 4.4.4 --- Motion Estimation and Compensation (ME/MC) --- p.35 / Chapter Chapter 5 --- Shape Adaptive Wavelet Transformation Coding Scheme (SAWT) --- p.36 / Chapter 5.1 --- Shape Adaptive Wavelet Transformation --- p.36 / Chapter 5.1.1 --- Introduction --- p.36 / Chapter 5.1.2 --- Description of Transformation Scheme --- p.37 / Chapter 5.2 --- Quantization --- p.40 / Chapter 5.3 --- Entropy Coding --- p.42 / Chapter 5.3.1 --- Introduction --- p.42 / Chapter 5.3.2 --- Stack Run Algorithm --- p.42 / Chapter 5.3.3 --- ZeroTree Entropy (ZTE) Coding Algorithm --- p.45 / Chapter 5.4 --- Binary Shape Coding --- p.49 / Chapter Chapter 6 --- Simulation --- p.51 / Chapter 6.1 --- Introduction --- p.51 / Chapter 6.2 --- SSAWT-Stack Run --- p.52 / Chapter 6.3 --- SSAWT-ZTR --- p.53 / Chapter 6.4 --- Simulation Results --- p.55 / Chapter 6.4.1 --- SSAWT - STACK --- p.55 / Chapter 6.4.2 --- SSAWT - ZTE --- p.56 / Chapter 6.4.3 --- Comparison Result - Cjpeg and Wave03. --- p.57 / Chapter 6.5 --- Shape Coding Result --- p.61 / Chapter 6.6 --- Analysis --- p.63 / Chapter Chapter 7 --- Conclusion --- p.64 / Appendix A: Image Segmentation --- p.65 / Reference --- p.68
|