101. MDRS: a low complexity scheduler with deterministic performance guarantee for VBR video delivery. January 2001.
by Lai Hin Lun. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 54-57). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iv / Table of Contents --- p.v / List of Figures --- p.vii / Chapter Chapter 1 --- Introduction --- p.1 / Chapter Chapter 2 --- Related Works --- p.8 / Chapter 2.1 --- Source Modeling --- p.9 / Chapter 2.2 --- CBR Scheduler for VBR Delivery --- p.11 / Chapter 2.3 --- Brute Force Scheduler: --- p.15 / Chapter 2.4 --- Temporal Smoothing Scheduler: --- p.16 / Chapter Chapter 3 --- Decreasing Rate Scheduling --- p.22 / Chapter 3.1 --- MDRS with Minimum Buffer Requirement --- p.25 / Chapter 3.2 --- 2-Rate MDRS --- p.31 / Chapter Chapter 4 --- Performance Evaluation --- p.33 / Chapter 4.1 --- Buffer Requirement --- p.35 / Chapter 4.2 --- Startup Delay --- p.38 / Chapter 4.3 --- Disk Utilization --- p.39 / Chapter 4.4 --- Complexity --- p.43 / Chapter Chapter 5 --- Conclusion --- p.49 / Appendix --- p.51 / Bibliography --- p.54
102. Image motion estimation for 3D model based video conferencing. January 2000.
Cheung Man-kin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 116-120). / Abstracts in English and Chinese. / Chapter 1) --- Introduction --- p.1 / Chapter 1.1) --- Building of the 3D Wireframe and Facial Model --- p.2 / Chapter 1.2) --- Description of 3D Model Based Video Conferencing --- p.3 / Chapter 1.3) --- Wireframe Model Fitting or Conformation --- p.6 / Chapter 1.4) --- Pose Estimation --- p.8 / Chapter 1.5) --- Facial Motion Estimation and Synthesis --- p.9 / Chapter 1.6) --- Thesis Outline --- p.10 / Chapter 2) --- Wireframe model Fitting --- p.11 / Chapter 2.1) --- Algorithm of WFM Fitting --- p.12 / Chapter 2.1.1) --- Global Deformation --- p.14 / Chapter a) --- Scaling --- p.14 / Chapter b) --- Shifting --- p.15 / Chapter 2.1.2) --- Local Deformation --- p.15 / Chapter a) --- Shifting --- p.16 / Chapter b) --- Scaling --- p.17 / Chapter 2.1.3) --- Fine Updating --- p.17 / Chapter 2.2) --- Steps of Fitting --- p.18 / Chapter 2.3) --- Functions of Different Deformation --- p.18 / Chapter 2.4) --- Experimental Results --- p.19 / Chapter 2.4.1) --- Output wireframe in each step --- p.19 / Chapter 2.4.2) --- Examples of Mis-fitted wireframe with incoming image --- p.22 / Chapter 2.4.3) --- Fitted 3D facial wireframe --- p.23 / Chapter 2.4.4) --- Effect of mis-fitted wireframe after compensation of motion --- p.24 / Chapter 2.5) --- Summary --- p.26 / Chapter 3) --- Epipolar Geometry --- p.27 / Chapter 3.1) --- Pinhole Camera Model and Perspective Projection --- p.28 / Chapter 3.2) --- Concepts in Epipolar Geometry --- p.31 / Chapter 3.2.1) --- Working with normalized image coordinates --- p.33 / Chapter 3.2.2) --- Working with pixel image coordinates --- p.35 / Chapter 3.2.3) --- Summary --- p.37 / Chapter 3.3) --- 8-point Algorithm (Essential and Fundamental Matrix) --- p.38 / Chapter 3.3.1) --- Outline of the 8-point algorithm --- p.38 / Chapter 3.3.2) --- Modification on obtained Fundamental Matrix --- p.39 / Chapter 3.3.3) --- Transformation of Image Coordinates --- p.40 / Chapter a) --- Translation to mean of points --- p.40 / Chapter b) --- Normalizing transformation --- p.41 / Chapter 3.3.4) --- Summary of 8-point algorithm --- p.41 / Chapter 3.4) --- Estimation of Object Position by Decomposition of Essential Matrix --- p.43 / Chapter 3.4.1) --- Algorithm Derivation --- p.43 / Chapter 3.4.2) --- Algorithm Outline --- p.46 / Chapter 3.5) --- Noise Sensitivity --- p.48 / Chapter 3.5.1) --- Rotation vector of model --- p.48 / Chapter 3.5.2) --- The projection of rotated model --- p.49 / Chapter 3.5.3) --- Noisy image --- p.51 / Chapter 3.5.4) --- Summary --- p.51 / Chapter 4) --- Pose Estimation --- p.54 / Chapter 4.1) --- Linear Method --- p.55 / Chapter 4.1.1) --- Theory --- p.55 / Chapter 4.1.2) --- Normalization --- p.57 / Chapter 4.1.3) --- Experimental Results --- p.58 / Chapter a) --- Synthesized image by linear method without normalization --- p.58 / Chapter b) --- Performance between linear method with and without normalization --- p.60 / Chapter c) --- Performance of linear method under quantization noise with different transformation components --- p.62 / Chapter d) --- Performance of normalized case without transformation in z- component --- p.63 / Chapter 4.1.4) --- Summary --- p.64 / Chapter 4.2) --- Two Stage Algorithm --- p.66 / Chapter 4.2.1) --- Introduction --- p.66 / Chapter 4.2.2) --- The Two Stage Algorithm --- p.67 / Chapter a) --- Stage 1 (Iterative Method) --- p.68 / Chapter b) --- Stage 2 ( 
Non-linear Optimization) --- p.71 / Chapter 4.2.3) --- Summary of the Two Stage Algorithm --- p.72 / Chapter 4.2.4) --- Experimental Results --- p.72 / Chapter 4.2.5) --- Summary --- p.80 / Chapter 5) --- Facial Motion Estimation and Synthesis --- p.81 / Chapter 5.1) --- Facial Expression based on face muscles --- p.83 / Chapter 5.1.1) --- Review of Action Unit Approach --- p.83 / Chapter 5.1.2) --- Distribution of Motion Unit --- p.85 / Chapter 5.1.3) --- Algorithm --- p.89 / Chapter a) --- For Unidirectional Motion Unit --- p.89 / Chapter b) --- For Circular Motion Unit (eyes) --- p.90 / Chapter c) --- For Another Circular Motion Unit (mouth) --- p.90 / Chapter 5.1.4) --- Experimental Results --- p.91 / Chapter 5.1.5) --- Summary --- p.95 / Chapter 5.2) --- Detection of Facial Expression by Muscle-based Approach --- p.96 / Chapter 5.2.1) --- Theory --- p.96 / Chapter 5.2.2) --- Algorithm --- p.97 / Chapter a) --- For Sheet Muscle --- p.97 / Chapter b) --- For Circular Muscle --- p.98 / Chapter c) --- For Mouth Muscle --- p.99 / Chapter 5.2.3) --- Steps of Algorithm --- p.100 / Chapter 5.2.4) --- Experimental Results --- p.101 / Chapter 5.2.5) --- Summary --- p.103 / Chapter 6) --- Conclusion --- p.104 / Chapter 6.1) --- WFM fitting --- p.104 / Chapter 6.2) --- Pose Estimation --- p.105 / Chapter 6.3) --- Facial Estimation and Synthesis --- p.106 / Chapter 6.4) --- Discussion on Future Improvements --- p.107 / Chapter 6.4.1) --- WFM Fitting --- p.107 / Chapter 6.4.2) --- Pose Estimation --- p.109 / Chapter 6.4.3) --- Facial Motion Estimation and Synthesis --- p.110 / Chapter 7) --- Appendix --- p.111 / Chapter 7.1) --- Newton's Method or Newton-Raphson Method --- p.111 / Chapter 7.2) --- H.261 --- p.113 / Chapter 7.3) --- 3D Measurement --- p.114 / Bibliography --- p.116
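The table of contents above centres on the normalized 8-point algorithm for estimating the fundamental matrix (sections 3.3.1 to 3.3.4). As a rough illustration of that standard technique, not the thesis' own code, the sketch below estimates F from eight or more point correspondences, applying the normalizing transformation of section 3.3.3 and enforcing the rank-2 constraint; the function names and the use of NumPy are assumptions.

```python
import numpy as np

def normalize_points(pts):
    """Translate points to zero mean and scale so the mean distance from the
    origin is sqrt(2) (the conditioning step of the normalized 8-point algorithm)."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2.0) / np.linalg.norm(pts - centroid, axis=1).mean()
    t = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (t @ pts_h.T).T, t

def eight_point(x1, x2):
    """Estimate the fundamental matrix F (x2^T F x1 = 0) from >= 8 correspondences,
    given as (N, 2) arrays of pixel coordinates in the two images."""
    n1, t1 = normalize_points(x1)
    n2, t2 = normalize_points(x2)
    a = np.column_stack([n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
                         n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
                         n1[:, 0], n1[:, 1], np.ones(len(x1))])
    _, _, vt = np.linalg.svd(a)
    f = vt[-1].reshape(3, 3)
    u, s, vt = np.linalg.svd(f)                     # enforce the rank-2 constraint
    f = u @ np.diag([s[0], s[1], 0.0]) @ vt
    f = t2.T @ f @ t1                               # undo the normalization
    return f / f[2, 2]                              # fix the scale so F[2,2] = 1
```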
103. Foreground/background video coding for video conferencing = 應用於視訊會議之前景/後景視訊編碼. January 2002.
Lee Kar Kin Edwin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 129-134). / Text in English; abstracts in English and Chinese. / Lee Kar Kin Edwin. / Acknowledgement --- p.ii / Abstract --- p.iii / Contents --- p.vii / List of Figures --- p.ix / List of Tables --- p.xiii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- A brief review of transform-based video coding --- p.1 / Chapter 1.2 --- A brief review of content-based video coding --- p.6 / Chapter 1.3 --- Objectives of the research work --- p.9 / Chapter 1.4 --- Thesis outline --- p.12 / Chapter 2 --- Incorporation of DC Coefficient Restoration into Foreground/Background coding --- p.13 / Chapter 2.1 --- Introduction --- p.13 / Chapter 2.2 --- A review of FB coding in H.263 sequence --- p.15 / Chapter 2.3 --- A review of DCCR --- p.18 / Chapter 2.4 --- DCCRFB coding --- p.23 / Chapter 2.4.1 --- Methodology --- p.23 / Chapter 2.4.2 --- Implementation --- p.24 / Chapter 2.4.3 --- Experimental results --- p.26 / Chapter 2.5 --- The use of block selection scheme in DCCRFB coding --- p.32 / Chapter 2.5.1 --- Introduction --- p.32 / Chapter 2.5.2 --- Experimental results --- p.34 / Chapter 2.6 --- Summary --- p.47 / Chapter 3 --- Chin contour estimation on foreground human faces --- p.48 / Chapter 3.1 --- Introduction --- p.48 / Chapter 3.2 --- Least mean square estimation of chin location --- p.50 / Chapter 3.3 --- Chin contour estimation using chin edge detector and contour modeling --- p.58 / Chapter 3.3.1 --- Face segmentation and facial organ extraction --- p.59 / Chapter 3.3.2 --- Identification of search window --- p.59 / Chapter 3.3.3 --- Edge detection using chin edge detector --- p.60 / Chapter 3.3.4 --- "Determination of C0, C1 and c2" --- p.63 / Chapter 3.3.5 --- Chin contour modeling --- p.67 / Chapter 3.4 --- Experimental results --- p.71 / Chapter 3.5 --- Summary --- p.77 / Chapter 4 --- Wire-frame model deformation and face animation using FAP --- p.78 / Chapter 4.1 --- Introduction --- p.78 / Chapter 4.2 --- Wire-frame face model deformation --- p.79 / Chapter 4.2.1 --- Introduction --- p.79 / Chapter 4.2.2 --- Wire-frame model selection and FDP generation --- p.81 / Chapter 4.2.3 --- Global deformation --- p.85 / Chapter 4.2.4 --- Local deformation --- p.87 / Chapter 4.2.5 --- Experimental results --- p.93 / Chapter 4.3 --- Face animation using FAP --- p.98 / Chapter 4.3.1 --- Introduction and methodology --- p.98 / Chapter 4.3.2 --- Experiments --- p.102 / Chapter 4.4 --- Summary --- p.112 / Chapter 5 --- Conclusions and future developments --- p.113 / Chapter 5.1 --- Contributions and conclusions --- p.113 / Chapter 5.2 --- Future developments --- p.117 / Appendix A H.263 bitstream syntax --- p.122 / Appendix B Excerpt of the FAP specification table [17] --- p.123 / Bibliography --- p.129
104. Robust and efficient techniques for automatic video segmentation. January 1998.
by Lam Cheung Fai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 174-179). / Abstract also in Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Problem Definition --- p.2 / Chapter 1.2 --- Motivation --- p.5 / Chapter 1.3 --- Problems --- p.7 / Chapter 1.3.1 --- Illumination Changes and Motions in Videos --- p.7 / Chapter 1.3.2 --- Variations in Video Scene Characteristics --- p.8 / Chapter 1.3.3 --- High Complexity of Algorithms --- p.10 / Chapter 1.3.4 --- Heterogeneous Approaches to Video Segmentation --- p.10 / Chapter 1.4 --- Objectives and Approaches --- p.11 / Chapter 1.5 --- Organization of the Thesis --- p.13 / Chapter 2 --- Related Work --- p.15 / Chapter 2.1 --- Algorithms for Uncompressed Videos --- p.16 / Chapter 2.1.1 --- Pixel-based Method --- p.16 / Chapter 2.1.2 --- Histogram-based Method --- p.17 / Chapter 2.1.3 --- Motion-based Algorithms --- p.18 / Chapter 2.1.4 --- Color-ratio Based Algorithms --- p.18 / Chapter 2.2 --- Algorithms for Compressed Videos --- p.19 / Chapter 2.2.1 --- Algorithms based on JPEG Image Sequences --- p.19 / Chapter 2.2.2 --- Algorithms based on MPEG Videos --- p.20 / Chapter 2.2.3 --- Algorithms based on VQ Compressed Videos --- p.21 / Chapter 2.3 --- Frame Difference Analysis Methods --- p.21 / Chapter 2.3.1 --- Scene Cut Detection --- p.21 / Chapter 2.3.2 --- Gradual Transition Detection --- p.22 / Chapter 2.4 --- Speedup Techniques --- p.23 / Chapter 2.5 --- Other Approaches --- p.24 / Chapter 3 --- Analysis and Enhancement of Existing Algorithms --- p.25 / Chapter 3.1 --- Introduction --- p.25 / Chapter 3.2 --- Video Segmentation Algorithms --- p.26 / Chapter 3.2.1 --- Frame Difference Metrics --- p.26 / Chapter 3.2.2 --- Frame Difference Analysis Methods --- p.29 / Chapter 3.3 --- Analysis of Feature Extraction Algorithms --- p.30 / Chapter 3.3.1 --- Pair-wise pixel comparison --- p.30 / Chapter 3.3.2 --- Color histogram comparison --- p.34 / Chapter 3.3.3 --- Pair-wise block-based comparison of DCT coefficients --- p.38 / Chapter 3.3.4 --- Pair-wise pixel comparison of DC-images --- p.42 / Chapter 3.4 --- Analysis of Scene Change Detection Methods --- p.45 / Chapter 3.4.1 --- Global Threshold Method --- p.45 / Chapter 3.4.2 --- Sliding Window Method --- p.46 / Chapter 3.5 --- Enhancements and Modifications --- p.47 / Chapter 3.5.1 --- Histogram Equalization --- p.49 / Chapter 3.5.2 --- DD Method --- p.52 / Chapter 3.5.3 --- LA Method --- p.56 / Chapter 3.5.4 --- Modification for pair-wise pixel comparison --- p.57 / Chapter 3.5.5 --- Modification for pair-wise DCT block comparison --- p.61 / Chapter 3.6 --- Conclusion --- p.69 / Chapter 4 --- Color Difference Histogram --- p.72 / Chapter 4.1 --- Introduction --- p.72 / Chapter 4.2 --- Color Difference Histogram --- p.73 / Chapter 4.2.1 --- Definition of Color Difference Histogram --- p.73 / Chapter 4.2.2 --- Sparse Distribution of CDH --- p.76 / Chapter 4.2.3 --- Resolution of CDH --- p.77 / Chapter 4.2.4 --- CDH-based Inter-frame Similarity Measure --- p.77 / Chapter 4.2.5 --- Computational Cost and Discriminating Power --- p.80 / Chapter 4.2.6 --- Suitability in Scene Change Detection --- p.83 / Chapter 4.3 --- Insensitivity to Illumination Changes --- p.89 / Chapter 4.3.1 --- Sensitivity of CDH --- p.90 / Chapter 4.3.2 --- Comparison with other feature extraction algorithms --- p.93 / Chapter 4.4 --- Orientation and Motion Invariant --- p.96 / Chapter 4.4.1 --- Camera Movements --- p.97 / Chapter 4.4.2 --- Object 
Motion --- p.100 / Chapter 4.4.3 --- Comparison with other feature extraction algorithms --- p.100 / Chapter 4.5 --- Performance of Scene Cut Detection --- p.102 / Chapter 4.6 --- Time Complexity Comparison --- p.105 / Chapter 4.7 --- Extension to DCT-compressed Images --- p.106 / Chapter 4.7.1 --- Performance of scene cut detection --- p.108 / Chapter 4.8 --- Conclusion --- p.109 / Chapter 5 --- Scene Change Detection --- p.111 / Chapter 5.1 --- Introduction --- p.111 / Chapter 5.2 --- Previous Approaches --- p.112 / Chapter 5.2.1 --- Scene Cut Detection --- p.112 / Chapter 5.2.2 --- Gradual Transition Detection --- p.115 / Chapter 5.3 --- DD Method --- p.116 / Chapter 5.3.1 --- Detecting Scene Cuts --- p.117 / Chapter 5.3.2 --- Detecting 1-frame Transitions --- p.121 / Chapter 5.3.3 --- Detecting Gradual Transitions --- p.129 / Chapter 5.4 --- Local Thresholding --- p.131 / Chapter 5.5 --- Experimental Results --- p.134 / Chapter 5.5.1 --- Performance of CDH+DD and CDH+DL --- p.135 / Chapter 5.5.2 --- Performance of DD on other features --- p.144 / Chapter 5.6 --- Conclusion --- p.150 / Chapter 6 --- Motion Vector Based Approach --- p.151 / Chapter 6.1 --- Introduction --- p.151 / Chapter 6.2 --- Previous Approaches --- p.152 / Chapter 6.3 --- MPEG-I Video Stream Format --- p.153 / Chapter 6.4 --- Derivation of Frame Differences from Motion Vector Counts --- p.156 / Chapter 6.4.1 --- Types of Frame Pairs --- p.156 / Chapter 6.4.2 --- Conditions for Scene Changes --- p.157 / Chapter 6.4.3 --- Frame Difference Measure --- p.159 / Chapter 6.5 --- Experiment --- p.160 / Chapter 6.5.1 --- Performance of MV --- p.161 / Chapter 6.5.2 --- Performance Enhancement --- p.162 / Chapter 6.5.3 --- Limitations --- p.163 / Chapter 6.6 --- Conclusion --- p.164 / Chapter 7 --- Conclusion and Future Work --- p.165 / Chapter 7.1 --- Contributions --- p.165 / Chapter 7.2 --- Future Work --- p.169 / Chapter 7.3 --- Conclusion --- p.171 / Bibliography --- p.174 / Chapter A --- Sample Videos --- p.180 / Chapter B --- List of Abbreviations --- p.183
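Chapters 3 to 5 above evaluate frame-difference metrics (including the proposed Color Difference Histogram) together with global-threshold and sliding-window decision rules for scene cut detection. The sketch below is a generic illustration of that pipeline under assumed parameters: a coarse joint RGB histogram as the frame feature and a sliding-window maximum test as the decision rule. It does not reproduce the thesis' CDH or DD method; the bin count, window size, ratio and floor are placeholders.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Joint RGB histogram with 'bins' levels per channel, normalized to sum to 1."""
    q = (frame // (256 // bins)).reshape(-1, 3).astype(np.int64)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def frame_differences(frames, bins=8):
    """L1 histogram difference between consecutive frames (values in [0, 2])."""
    hists = [color_histogram(f, bins) for f in frames]
    return np.array([np.abs(hists[i + 1] - hists[i]).sum()
                     for i in range(len(hists) - 1)])

def detect_cuts(diffs, window=10, ratio=3.0, floor=0.2):
    """Sliding-window rule: declare a cut after frame i when d[i] is the local
    maximum, exceeds a floor, and dominates the second-largest value in the
    window by 'ratio' (an assumed variant, not the thesis' DD method)."""
    cuts, half = [], window // 2
    for i in range(len(diffs)):
        lo, hi = max(0, i - half), min(len(diffs), i + half + 1)
        win = np.sort(diffs[lo:hi])
        if diffs[i] >= win[-1] and diffs[i] >= floor and \
           (len(win) < 2 or diffs[i] >= ratio * win[-2]):
            cuts.append(i + 1)          # cut lies between frame i and frame i+1
    return cuts
```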
105. Motion estimation and segmentation. / CUHK electronic theses & dissertations collection. January 2008.
Based on the fixed block size FWS algorithm, we further proposed a fast full-pel variable block size motion estimation algorithm called Fast Walsh Search in Variable Block Size (FWS-VBS). As in FWS, FWS-VBS employs the PSAD as the error measure to identify likely mismatches. Mismatches are rejected by thresholding method and the thresholds are determined adaptively to cater for different activity levels in each block. Early termination techniques are employed to further reduce the number of candidates and modes to be searched of each block. FWS-VBS performs equally well to the exhaustive full search algorithm in the reference H.264/AVC encoder and requires only about 10% of the computation time. / Furthermore, we modified our proposed segmentation algorithm to handle video sequences that are already encoded in the H.264 format. Since the video is compressed, no spatial information is available. Instead, quantized transform coefficients of the residual frame are used to approximate spatial information and improve segmentation result. The computation time of the segmentation process is merely about 16ms per frame for CIF frame size video, allowing the algorithm to be applied in real-time applications such as video surveillance and conferencing. / In the first part of our research, we proposed a block matching algorithm called Fast Walsh Search (FWS) for video motion estimation. FWS employs two new error measures defined in Walsh Hadamard domain, which are partial sum-of-absolute difference (PSAD) and sum-of-absolute difference of DC coefficients (SADDCC). The algorithm first rejects most mismatched candidates using PSAD which is a coarse measure requiring little computation. Because of the energy packing ability of Walsh Hadamard transform (WHT) and the utilization of fast WHT computation algorithm, mismatched candidates are identified and rejected efficiently. Then the proposed algorithm identifies the matched candidate from the remaining candidates using SADDCC which is a more accurate measure and can reuse computation performed for PSAD. Experimental results show that FWS can give good visual quality to most of video scene with a reasonable amount of computation. / In the second part of our research, we developed a real-time video object segmentation algorithm. The motion information is obtained by FWS-VBS to minimize the computation time while maintaining an adequate accuracy. The algorithm makes use of the motion information to identify background motion model and moving objects. In order to preserve spatial and temporal continuity of objects, Markov random field (MRF) is used to model the foreground field. The block-based foreground object mask is obtained by minimizing the energy function of the MRF. The resulting object mask is then post-processed to generate a smooth object mask. Experimental results show that the proposed algorithm can effectively extract moving objects from different kind of sequences, at a speed of less than 100ms per frame for CIF frame size video. / Motion estimation is an important part in many video processing applications, such as video compression, object segmentation, and scene analysis. In all video compression applications, motion information is used to reduce temporal redundancy between frames, thus significantly reduce the required bitrate for transmission and storage of compressed video. In addition, in object-based video coding, video object can be automatically identified by its motion against the background. / Mak, Chun Man. / "June 2008." 
/ Adviser: Wai-Kuen Cham. / Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1849. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references. / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
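The abstract above describes the two-stage idea behind FWS and FWS-VBS: candidates are first screened with a cheap partial sum-of-absolute-difference (PSAD) computed on Walsh-Hadamard coefficients, and only the survivors are compared with an exact measure. The sketch below illustrates that idea for a single block over an exhaustive candidate loop; the natural-order Hadamard matrix, the 4x4 coefficient subset and the fixed rejection threshold are assumptions, whereas the thesis uses a fast WHT, transform-domain measures and adaptive thresholds.

```python
import numpy as np

def hadamard(n):
    """Order-n Hadamard matrix (n a power of two) by the Sylvester construction."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def psad(a, b, h, k=4):
    """Partial SAD over a k x k subset of Walsh-Hadamard coefficients: a coarse,
    cheap proxy for the full SAD (an assumed stand-in for the thesis' PSAD)."""
    return np.abs((h @ (a - b) @ h.T)[:k, :k]).sum()

def fws_match(cur, ref, bx, by, bs=16, rng=8, reject=4096):
    """Two-stage block matching: reject candidates whose PSAD exceeds a threshold,
    then pick the best survivor by exact pixel-domain SAD."""
    h = hadamard(bs)
    blk = cur[by:by + bs, bx:bx + bs].astype(np.int64)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue
            cand = ref[y:y + bs, x:x + bs].astype(np.int64)
            if psad(blk, cand, h) > reject:         # coarse screening stage
                continue
            sad = np.abs(blk - cand).sum()          # exact measure on survivors
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```

In practice the coarse stage only pays off when the partial transform is computed incrementally with a fast WHT; computing the full transform for every candidate, as done here for clarity, would not save any work.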
106. Efficient and perceptual picture coding techniques. / CUHK electronic theses & dissertations collection. January 2009.
In the first part, some efficient algorithms are proposed to reduce the complexity of H.264 encoder, which is the latest state-of-the-art video coding standard. Intra and Inter mode decision play a vital role in H.264 encoder and can reduce the spatial and temporal redundancy significantly, but the computational cost is also high. Here, a fast Intra mode decision algorithm and a fast Inter mode decision algorithm are proposed. Experimental results show that the proposed algorithms not only save a lot of computational cost, but also maintain coding performance quite well. Moreover, a real time H.264 baseline codec is implemented on mobile device. Based on our real time H.264 codec, an H.264 based mobile video conferencing system is achieved. / The objective of this thesis is to develop some efficient and perceptual image and video coding techniques. Two parts of the work are investigated in this thesis. / The second part of this thesis investigates two kinds of perceptual picture coding techniques. One is the just noticeable distortion (JND) based picture coding. Firstly, a DCT based spatio-temporal JND model is proposed, which is an efficient model to represent the perceptual redundancies existing in images and is consistent with the human visual system (HVS) characteristic. Secondly, the proposed JND model is incorporated into image and video coding to improve the perceptual quality. Based on the JND model, a transparent image coder and a perceptually optimized H.264 video coder are implemented. Another technique is the image compression scheme based on the recent advances in texture synthesis. In this part, an image compression scheme is proposed with the perceptual visual quality as the performance criterion instead of the pixel-wise fidelity. As demonstrated in extensive experiments, the proposed techniques can improve the perceptual quality of picture coding significantly. / Wei Zhenyu. / Adviser: Ngan Ngi. / Source: Dissertation Abstracts International, Volume: 73-01, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 148-154). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
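The abstract above builds a DCT-based spatio-temporal JND model and uses it to drop perceptually invisible information before coding. The sketch below shows only the general mechanism under assumed numbers: residual DCT coefficients whose magnitude stays below a per-frequency JND threshold are zeroed before quantization. The flat threshold is a placeholder; the thesis derives frequency-, luminance- and motion-dependent thresholds.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal order-n DCT-II basis matrix."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def jnd_code_block(residual, jnd, qstep=16.0):
    """Transform an 8x8 residual block, zero coefficients whose magnitude lies
    below the JND threshold (perceptually invisible), then quantize the rest."""
    c = dct_matrix(8)
    coeff = c @ residual @ c.T
    coeff[np.abs(coeff) < jnd] = 0.0
    return np.round(coeff / qstep).astype(np.int64)

# Illustrative use with a flat threshold; a spatio-temporal JND model would
# raise the threshold with spatial frequency, local luminance and motion.
residual = np.random.randn(8, 8) * 20.0
levels = jnd_code_block(residual, jnd=np.full((8, 8), 8.0))
```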
107. Novel error resilient techniques for the robust transport of MPEG-4 video over error-prone networks. / CUHK electronic theses & dissertations collection. January 2004.
Bo Yan. / "May 2004." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (p. 117-131). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
108. Arbitrary block-size transform video coding. / CUHK electronic theses & dissertations collection. January 2011.
Besides ABT with higher order transforms, a transform based template matching is also investigated. A fast method of template matching, called Fast Walsh Search, is developed. This search method has accuracy similar to exhaustive search but a significantly lower computation requirement. / In this thesis, the development of simple but efficient order-16 transforms will be shown. Analysis and comparison with existing order-16 transforms have been carried out. The proposed order-16 transforms were integrated into the existing coding standard reference software individually so as to achieve a new ABT system. In the proposed ABT system, order-4, order-8 and order-16 transforms coexist. The selection of the most appropriate transform is based on the rate-distortion performance of these transforms. A remarkable improvement in coding performance is shown in the experimental results. A significant bit rate reduction can be achieved with our proposed ABT system while both subjective and objective quality remain unchanged. / Prior knowledge of the coefficient distribution is a key to achieving better coding performance. This is very useful in many areas of coding such as rate control, rate-distortion optimization, etc. It is also shown that the coefficient distribution of the predicted residue is closer to a Cauchy distribution than to the traditionally assumed Laplace distribution. This can effectively improve existing processing techniques. / Three kinds of order-16 orthogonal DCT-like integer transforms are proposed in this thesis. The first one is the simple integer transform, which is expanded from the existing order-8 ICT. The second one is the hybrid integer transform from the Dyadic Weighted Walsh Transform (DWWT). It is shown that it has better performance than the simple integer transform. The last one is a recursive transform: an order-2N transform can be derived from an order-N one. It is very close to the DCT. This recursive transform can be implemented in two different ways, denoted LLMICT and CSFICT. They have excellent coding performance. These proposed transforms are investigated and implemented into the reference software of H.264 and AVS. They are also compared with other order-16 orthogonal integer transforms. Experimental results show that the proposed transforms give excellent coding performance and are easy to compute. / Transform is a very important coding tool in video coding. It decorrelates the pixel data and removes the redundancy among pixels so as to achieve compression. Traditionally, the order-8 transform is used in video and image coding. The latest video coding standards, such as H.264/AVC, adopt both order-4 and order-8 transforms. The adaptive use of more than one transform size is known as Arbitrary Block-size Transform (ABT). Transforms other than order-4 and order-8 can also be used in ABT. It is expected that larger transform sizes such as order-16 will benefit more in video sequences with higher resolutions such as 720p and 1080p sequences. As a result, the order-16 transform is introduced into the ABT system. / Fong, Chi Keung. / Adviser: Wai Kuen Cham. / Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references. / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
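The abstract explains that in the proposed ABT system order-4, order-8 and order-16 transforms coexist and the encoder keeps the one with the best rate-distortion performance. The sketch below illustrates that mode decision only, using a floating-point DCT in place of the proposed integer transforms (simple ICT, DWWT-based hybrid, LLMICT/CSFICT) and counting nonzero quantized coefficients as a crude rate proxy; the quantizer step and lambda are assumed values.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal order-n DCT-II basis matrix (floating-point stand-in for the
    thesis' order-4/8/16 integer transforms)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def rd_cost(block, size, qstep=20.0, lam=8.0):
    """J = D + lambda * R for coding 'block' with order-'size' transforms, where
    the rate R is approximated by the number of nonzero quantized coefficients."""
    c = dct_matrix(size)
    dist, bits = 0.0, 0
    for y in range(0, block.shape[0], size):
        for x in range(0, block.shape[1], size):
            sub = block[y:y + size, x:x + size]
            q = np.round((c @ sub @ c.T) / qstep)
            rec = c.T @ (q * qstep) @ c             # dequantize and inverse transform
            dist += float(np.sum((sub - rec) ** 2))
            bits += int(np.count_nonzero(q))
    return dist + lam * bits

def choose_transform(block16):
    """ABT-style mode decision for a 16x16 residual block: keep the transform
    order with the lowest rate-distortion cost."""
    costs = {n: rd_cost(block16, n) for n in (4, 8, 16)}
    return min(costs, key=costs.get), costs
```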
109. Three dimensional DCT based video compression. January 1997.
by Chan Kwong Wing Raymond. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 115-123). / Acknowledgments --- p.i / Table of Contents --- p.ii-v / List of Tables --- p.vi / List of Figures --- p.vii / Abstract --- p.1 / Chapter Chapter 1 : --- Introduction / Chapter 1.1 --- An Introduction to Video Compression --- p.3 / Chapter 1.2 --- Overview of Problems --- p.4 / Chapter 1.2.1 --- Analog Video and Digital Problems --- p.4 / Chapter 1.2.2 --- Low Bit Rate Application Problems --- p.4 / Chapter 1.2.3 --- Real Time Video Compression Problems --- p.5 / Chapter 1.2.4 --- Source Coding and Channel Coding Problems --- p.6 / Chapter 1.2.5 --- Bit-rate and Quality Problems --- p.7 / Chapter 1.3 --- Organization of the Thesis --- p.7 / Chapter Chapter 2 : --- Background and Related Work / Chapter 2.1 --- Introduction --- p.9 / Chapter 2.1.1 --- Analog Video --- p.9 / Chapter 2.1.2 --- Digital Video --- p.10 / Chapter 2.1.3 --- Color Theory --- p.10 / Chapter 2.2 --- Video Coding --- p.12 / Chapter 2.2.1 --- Predictive Coding --- p.12 / Chapter 2.2.2 --- Vector Quantization --- p.12 / Chapter 2.2.3 --- Subband Coding --- p.13 / Chapter 2.2.4 --- Transform Coding --- p.14 / Chapter 2.2.5 --- Hybrid Coding --- p.14 / Chapter 2.3 --- Transform Coding --- p.15 / Chapter 2.3.1 --- Discrete Cosine Transform --- p.16 / Chapter 2.3.1.1 --- 1-D Fast Algorithms --- p.16 / Chapter 2.3.1.2 --- 2-D Fast Algorithms --- p.17 / Chapter 2.3.1.3 --- Multidimensional DCT Algorithms --- p.17 / Chapter 2.3.2 --- Quantization --- p.18 / Chapter 2.3.3 --- Entropy Coding --- p.18 / Chapter 2.3.3.1 --- Huffman Coding --- p.19 / Chapter 2.3.3.2 --- Arithmetic Coding --- p.19 / Chapter Chapter 3 : --- Existing Compression Scheme / Chapter 3.1 --- Introduction --- p.20 / Chapter 3.2 --- Motion JPEG --- p.20 / Chapter 3.3 --- MPEG --- p.20 / Chapter 3.4 --- H.261 --- p.22 / Chapter 3.5 --- Other Techniques --- p.23 / Chapter 3.5.1 --- Fractals --- p.23 / Chapter 3.5.2 --- Wavelets --- p.23 / Chapter 3.6 --- Proposed Solution --- p.24 / Chapter 3.7 --- Summary --- p.25 / Chapter Chapter 4 : --- Fast 3D-DCT Algorithms / Chapter 4.1 --- Introduction --- p.27 / Chapter 4.1.1 --- Motivation --- p.27 / Chapter 4.1.2 --- Potentials of 3D DCT --- p.28 / Chapter 4.2 --- Three Dimensional Discrete Cosine Transform (3D-DCT) --- p.29 / Chapter 4.2.1 --- Inverse 3D-DCT --- p.29 / Chapter 4.2.2 --- Forward 3D-DCT --- p.30 / Chapter 4.3 --- 3-D FCT (3-D Fast Cosine Transform Algorithm --- p.30 / Chapter 4.3.1 --- Partitioning and Rearrangement of Data Cube --- p.30 / Chapter 4.3.1.1 --- Spatio-temporal Data Cube --- p.30 / Chapter 4.3.1.2 --- Spatio-temporal Transform Domain Cube --- p.31 / Chapter 4.3.1.3 --- Coefficient Matrices --- p.31 / Chapter 4.3.2 --- 3-D Inverse Fast Cosine Transform (3-D IFCT) --- p.32 / Chapter 4.3.2.1 --- Matrix Representations --- p.32 / Chapter 4.3.2.2 --- Simplification of the calculation steps --- p.33 / Chapter 4.3.3 --- 3-D Forward Fast Cosine Transform (3-D FCT) --- p.35 / Chapter 4.3.3.1 --- Decomposition --- p.35 / Chapter 4.3.3.2 --- Reconstruction --- p.36 / Chapter 4.4 --- The Fast Algorithm --- p.36 / Chapter 4.5 --- Example using 4x4x4 IFCT --- p.38 / Chapter 4.6 --- Complexity Comparison --- p.43 / Chapter 4.6.1 --- Complexity of Multiplications --- p.43 / Chapter 4.6.2 --- Complexity of Additions --- p.43 / Chapter 4.7 --- Implementation Issues --- p.44 / Chapter 4.8 --- Summary --- p.46 / Chapter Chapter 5 : --- Quantization / Chapter 5.1 
--- Introduction --- p.49 / Chapter 5.2 --- Dynamic Ranges of 3D-DCT Coefficients --- p.49 / Chapter 5.3 --- Distribution of 3D-DCT AC Coefficients --- p.54 / Chapter 5.4 --- Quantization Volume --- p.55 / Chapter 5.4.1 --- Shifted Complement Hyperboloid --- p.55 / Chapter 5.4.2 --- Quantization Volume --- p.58 / Chapter 5.5 --- Scan Order for Quantized 3D-DCT Coefficients --- p.59 / Chapter 5.6 --- Finding Parameter Values --- p.60 / Chapter 5.7 --- Experimental Results from Using the Proposed Quantization Values --- p.65 / Chapter 5.8 --- Summary --- p.66 / Chapter Chapter 6 : --- Entropy Coding / Chapter 6.1 --- Introduction --- p.69 / Chapter 6.1.1 --- Huffman Coding --- p.69 / Chapter 6.1.2 --- Arithmetic Coding --- p.71 / Chapter 6.2 --- Zero Run-Length Encoding --- p.73 / Chapter 6.2.1 --- Variable Length Coding in JPEG --- p.74 / Chapter 6.2.1.1 --- Coding of the DC Coefficients --- p.74 / Chapter 6.2.1.2 --- Coding of the DC Coefficients --- p.75 / Chapter 6.2.2 --- Run-Level Encoding of the Quantized 3D-DCT Coefficients --- p.76 / Chapter 6.3 --- Frequency Analysis of the Run-Length Patterns --- p.76 / Chapter 6.3.1 --- The Frequency Distributions of the DC Coefficients --- p.77 / Chapter 6.3.2 --- The Frequency Distributions of the DC Coefficients --- p.77 / Chapter 6.4 --- Huffman Table Design --- p.84 / Chapter 6.4.1 --- DC Huffman Table --- p.84 / Chapter 6.4.2 --- AC Huffman Table --- p.85 / Chapter 6.5 --- Implementation Issue --- p.85 / Chapter 6.5.1 --- Get Category --- p.85 / Chapter 6.5.2 --- Huffman Encode --- p.86 / Chapter 6.5.3 --- Huffman Decode --- p.86 / Chapter 6.5.4 --- PutBits --- p.88 / Chapter 6.5.5 --- GetBits --- p.90 / Chapter Chapter 7 : --- "Contributions, Concluding Remarks and Future Work" / Chapter 7.1 --- Contributions --- p.92 / Chapter 7.2 --- Concluding Remarks --- p.93 / Chapter 7.2.1 --- The Advantages of 3D DCT codec --- p.94 / Chapter 7.2.2 --- Experimental Results --- p.95 / Chapter 7.1 --- Future Work --- p.95 / Chapter 7.2.1 --- Integer Discrete Cosine Transform Algorithms --- p.95 / Chapter 7.2.2 --- Adaptive Quantization Volume --- p.96 / Chapter 7.2.3 --- Adaptive Huffman Tables --- p.96 / Appendices: / Appendix A : The detailed steps in the simplification of Equation 4.29 --- p.98 / Appendix B : The program Listing of the Fast DCT Algorithms --- p.101 / Appendix C : Tables to Illustrate the Reording of the Quantized Coefficients --- p.110 / Appendix D : Sample Values of the Quantization Volume --- p.111 / Appendix E : A 16-bit VLC table for AC Run-Level Pairs --- p.113 / References --- p.115
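The thesis above codes video by transforming spatio-temporal cubes with a 3-D DCT, quantizing the coefficients and entropy coding the result. The sketch below shows only the separable transform in its direct form (one 1-D DCT per axis) and a quantization round trip on a 4x4x4 cube; the fast 3-D FCT of Chapter 4, the quantization volume of Chapter 5 and the Huffman tables of Chapter 6 are not reproduced, and the cube size and quantizer step are examples.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal order-n DCT-II basis matrix."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct3(cube):
    """Separable 3-D DCT: one 1-D DCT along each of the (t, y, x) axes of an
    n x n x n spatio-temporal cube."""
    c = dct_matrix(cube.shape[0])
    return np.einsum('at,by,cx,tyx->abc', c, c, c, cube)

def idct3(coeff):
    """Inverse separable 3-D DCT."""
    c = dct_matrix(coeff.shape[0])
    return np.einsum('at,by,cx,abc->tyx', c, c, c, coeff)

# Round trip on a random 4x4x4 cube with uniform quantization.
cube = np.random.randint(0, 256, (4, 4, 4)).astype(np.float64)
q = np.round(dct3(cube) / 16.0)          # quantize transform coefficients
rec = idct3(q * 16.0)                    # dequantize and inverse transform
print(np.abs(cube - rec).max())          # small reconstruction error from quantization
```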
110. On design of a scalable video data placement strategy for supporting a load balancing video-on-demand storage server. January 1997.
by Kelvin Kwok-wai Law. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 66-68). / Abstract --- p.i / Acknowledgments --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Motivation --- p.2 / Chapter 1.3 --- Scope --- p.3 / Chapter 1.4 --- Dissertation Outline --- p.4 / Chapter 2 --- Background and Related Researches --- p.6 / Chapter 2.1 --- Interactive Services --- p.6 / Chapter 2.2 --- VOD Architecture --- p.7 / Chapter 2.3 --- Video Compression --- p.10 / Chapter 2.3.1 --- DCT Based Compression --- p.11 / Chapter 2.3.2 --- Subband Video Compression --- p.12 / Chapter 2.4 --- Related Research --- p.14 / Chapter 3 --- Multiple Resolutions Video File System --- p.16 / Chapter 3.1 --- Physical Disk Storage System --- p.16 / Chapter 3.2 --- Multi-resolution Video Data Placement Scheme --- p.17 / Chapter 3.3 --- Example of our Video Block Assignment Algorithm --- p.23 / Chapter 3.4 --- An Assignment Algorithm for Homogeneous Video Files --- p.26 / Chapter 4 --- Disk Scheduling and Admission Control --- p.33 / Chapter 4.1 --- Disk Scheduling Algorithm --- p.33 / Chapter 4.2 --- Admission Control --- p.40 / Chapter 5 --- Load Balancing of the Disk System --- p.43 / Chapter 6 --- Buffer Management --- p.49 / Chapter 6.1 --- Buffer Organization --- p.49 / Chapter 6.2 --- Buffer Requirement For Different Video Playback Mode --- p.51 / Chapter 7 --- Conclusions --- p.63 / Bibliography --- p.66
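The thesis above places video blocks across multiple disks so that concurrent streams keep the disks evenly loaded. The sketch below is only a generic round-robin striping with a per-round load count, not the thesis' multi-resolution placement scheme; the disk count, block counts and staggered starting disks are illustrative.

```python
def place_blocks(num_blocks, num_disks, start_disk=0):
    """Round-robin placement: block i of a video goes to disk (start_disk + i) % num_disks.
    Staggering start_disk per video spreads the first blocks of popular titles."""
    return [(start_disk + i) % num_disks for i in range(num_blocks)]

def disk_load(streams, num_disks):
    """Blocks requested from each disk in one service round, given each active
    stream as a (placement list, next block index) pair."""
    load = [0] * num_disks
    for placement, pos in streams:
        load[placement[pos]] += 1
    return load

# Two videos striped over 4 disks with staggered starting disks.
v0 = place_blocks(12, 4, start_disk=0)
v1 = place_blocks(12, 4, start_disk=2)
print(disk_load([(v0, 0), (v0, 5), (v1, 3)], 4))
```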