  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

Foreground/background video coding for video conferencing = 應用於視訊會議之前景/後景視訊編碼 (Ying yong yu shi xun hui yi zhi qian jing/hou jing shi xun bian ma)

January 2002 (has links)
Lee Kar Kin Edwin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 129-134). / Text in English; abstracts in English and Chinese. / Acknowledgement --- p.ii / Abstract --- p.iii / Contents --- p.vii / List of Figures --- p.ix / List of Tables --- p.xiii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- A brief review of transform-based video coding --- p.1 / Chapter 1.2 --- A brief review of content-based video coding --- p.6 / Chapter 1.3 --- Objectives of the research work --- p.9 / Chapter 1.4 --- Thesis outline --- p.12 / Chapter 2 --- Incorporation of DC Coefficient Restoration into Foreground/Background coding --- p.13 / Chapter 2.1 --- Introduction --- p.13 / Chapter 2.2 --- A review of FB coding in H.263 sequence --- p.15 / Chapter 2.3 --- A review of DCCR --- p.18 / Chapter 2.4 --- DCCRFB coding --- p.23 / Chapter 2.4.1 --- Methodology --- p.23 / Chapter 2.4.2 --- Implementation --- p.24 / Chapter 2.4.3 --- Experimental results --- p.26 / Chapter 2.5 --- The use of block selection scheme in DCCRFB coding --- p.32 / Chapter 2.5.1 --- Introduction --- p.32 / Chapter 2.5.2 --- Experimental results --- p.34 / Chapter 2.6 --- Summary --- p.47 / Chapter 3 --- Chin contour estimation on foreground human faces --- p.48 / Chapter 3.1 --- Introduction --- p.48 / Chapter 3.2 --- Least mean square estimation of chin location --- p.50 / Chapter 3.3 --- Chin contour estimation using chin edge detector and contour modeling --- p.58 / Chapter 3.3.1 --- Face segmentation and facial organ extraction --- p.59 / Chapter 3.3.2 --- Identification of search window --- p.59 / Chapter 3.3.3 --- Edge detection using chin edge detector --- p.60 / Chapter 3.3.4 --- "Determination of C0, C1 and C2" --- p.63 / Chapter 3.3.5 --- Chin contour modeling --- p.67 / Chapter 3.4 --- Experimental results --- p.71 / Chapter 3.5 --- Summary --- p.77 / Chapter 4 --- Wire-frame model deformation and face animation using FAP --- p.78 /
Chapter 4.1 --- Introduction --- p.78 / Chapter 4.2 --- Wire-frame face model deformation --- p.79 / Chapter 4.2.1 --- Introduction --- p.79 / Chapter 4.2.2 --- Wire-frame model selection and FDP generation --- p.81 / Chapter 4.2.3 --- Global deformation --- p.85 / Chapter 4.2.4 --- Local deformation --- p.87 / Chapter 4.2.5 --- Experimental results --- p.93 / Chapter 4.3 --- Face animation using FAP --- p.98 / Chapter 4.3.1 --- Introduction and methodology --- p.98 / Chapter 4.3.2 --- Experiments --- p.102 / Chapter 4.4 --- Summary --- p.112 / Chapter 5 --- Conclusions and future developments --- p.113 / Chapter 5.1 --- Contributions and conclusions --- p.113 / Chapter 5.2 --- Future developments --- p.117 / Appendix A H.263 bitstream syntax --- p.122 / Appendix B Excerpt of the FAP specification table [17] --- p.123 / Bibliography --- p.129
312

Robust and efficient techniques for automatic video segmentation.

January 1998 (has links)
by Lam Cheung Fai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 174-179). / Abstract also in Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Problem Definition --- p.2 / Chapter 1.2 --- Motivation --- p.5 / Chapter 1.3 --- Problems --- p.7 / Chapter 1.3.1 --- Illumination Changes and Motions in Videos --- p.7 / Chapter 1.3.2 --- Variations in Video Scene Characteristics --- p.8 / Chapter 1.3.3 --- High Complexity of Algorithms --- p.10 / Chapter 1.3.4 --- Heterogeneous Approaches to Video Segmentation --- p.10 / Chapter 1.4 --- Objectives and Approaches --- p.11 / Chapter 1.5 --- Organization of the Thesis --- p.13 / Chapter 2 --- Related Work --- p.15 / Chapter 2.1 --- Algorithms for Uncompressed Videos --- p.16 / Chapter 2.1.1 --- Pixel-based Method --- p.16 / Chapter 2.1.2 --- Histogram-based Method --- p.17 / Chapter 2.1.3 --- Motion-based Algorithms --- p.18 / Chapter 2.1.4 --- Color-ratio Based Algorithms --- p.18 / Chapter 2.2 --- Algorithms for Compressed Videos --- p.19 / Chapter 2.2.1 --- Algorithms based on JPEG Image Sequences --- p.19 / Chapter 2.2.2 --- Algorithms based on MPEG Videos --- p.20 / Chapter 2.2.3 --- Algorithms based on VQ Compressed Videos --- p.21 / Chapter 2.3 --- Frame Difference Analysis Methods --- p.21 / Chapter 2.3.1 --- Scene Cut Detection --- p.21 / Chapter 2.3.2 --- Gradual Transition Detection --- p.22 / Chapter 2.4 --- Speedup Techniques --- p.23 / Chapter 2.5 --- Other Approaches --- p.24 / Chapter 3 --- Analysis and Enhancement of Existing Algorithms --- p.25 / Chapter 3.1 --- Introduction --- p.25 / Chapter 3.2 --- Video Segmentation Algorithms --- p.26 / Chapter 3.2.1 --- Frame Difference Metrics --- p.26 / Chapter 3.2.2 --- Frame Difference Analysis Methods --- p.29 / Chapter 3.3 --- Analysis of Feature Extraction Algorithms --- p.30 / Chapter 3.3.1 --- Pair-wise pixel comparison --- p.30 / Chapter 3.3.2 --- Color histogram comparison --- p.34 / 
Chapter 3.3.3 --- Pair-wise block-based comparison of DCT coefficients --- p.38 / Chapter 3.3.4 --- Pair-wise pixel comparison of DC-images --- p.42 / Chapter 3.4 --- Analysis of Scene Change Detection Methods --- p.45 / Chapter 3.4.1 --- Global Threshold Method --- p.45 / Chapter 3.4.2 --- Sliding Window Method --- p.46 / Chapter 3.5 --- Enhancements and Modifications --- p.47 / Chapter 3.5.1 --- Histogram Equalization --- p.49 / Chapter 3.5.2 --- DD Method --- p.52 / Chapter 3.5.3 --- LA Method --- p.56 / Chapter 3.5.4 --- Modification for pair-wise pixel comparison --- p.57 / Chapter 3.5.5 --- Modification for pair-wise DCT block comparison --- p.61 / Chapter 3.6 --- Conclusion --- p.69 / Chapter 4 --- Color Difference Histogram --- p.72 / Chapter 4.1 --- Introduction --- p.72 / Chapter 4.2 --- Color Difference Histogram --- p.73 / Chapter 4.2.1 --- Definition of Color Difference Histogram --- p.73 / Chapter 4.2.2 --- Sparse Distribution of CDH --- p.76 / Chapter 4.2.3 --- Resolution of CDH --- p.77 / Chapter 4.2.4 --- CDH-based Inter-frame Similarity Measure --- p.77 / Chapter 4.2.5 --- Computational Cost and Discriminating Power --- p.80 / Chapter 4.2.6 --- Suitability in Scene Change Detection --- p.83 / Chapter 4.3 --- Insensitivity to Illumination Changes --- p.89 / Chapter 4.3.1 --- Sensitivity of CDH --- p.90 / Chapter 4.3.2 --- Comparison with other feature extraction algorithms --- p.93 / Chapter 4.4 --- Orientation and Motion Invariant --- p.96 / Chapter 4.4.1 --- Camera Movements --- p.97 / Chapter 4.4.2 --- Object Motion --- p.100 / Chapter 4.4.3 --- Comparison with other feature extraction algorithms --- p.100 / Chapter 4.5 --- Performance of Scene Cut Detection --- p.102 / Chapter 4.6 --- Time Complexity Comparison --- p.105 / Chapter 4.7 --- Extension to DCT-compressed Images --- p.106 / Chapter 4.7.1 --- Performance of scene cut detection --- p.108 / Chapter 4.8 --- Conclusion --- p.109 / Chapter 5 --- Scene Change Detection --- p.111 / Chapter 5.1 --- Introduction --- p.111 /
Chapter 5.2 --- Previous Approaches --- p.112 / Chapter 5.2.1 --- Scene Cut Detection --- p.112 / Chapter 5.2.2 --- Gradual Transition Detection --- p.115 / Chapter 5.3 --- DD Method --- p.116 / Chapter 5.3.1 --- Detecting Scene Cuts --- p.117 / Chapter 5.3.2 --- Detecting 1-frame Transitions --- p.121 / Chapter 5.3.3 --- Detecting Gradual Transitions --- p.129 / Chapter 5.4 --- Local Thresholding --- p.131 / Chapter 5.5 --- Experimental Results --- p.134 / Chapter 5.5.1 --- Performance of CDH+DD and CDH+DL --- p.135 / Chapter 5.5.2 --- Performance of DD on other features --- p.144 / Chapter 5.6 --- Conclusion --- p.150 / Chapter 6 --- Motion Vector Based Approach --- p.151 / Chapter 6.1 --- Introduction --- p.151 / Chapter 6.2 --- Previous Approaches --- p.152 / Chapter 6.3 --- MPEG-I Video Stream Format --- p.153 / Chapter 6.4 --- Derivation of Frame Differences from Motion Vector Counts --- p.156 / Chapter 6.4.1 --- Types of Frame Pairs --- p.156 / Chapter 6.4.2 --- Conditions for Scene Changes --- p.157 / Chapter 6.4.3 --- Frame Difference Measure --- p.159 / Chapter 6.5 --- Experiment --- p.160 / Chapter 6.5.1 --- Performance of MV --- p.161 / Chapter 6.5.2 --- Performance Enhancement --- p.162 / Chapter 6.5.3 --- Limitations --- p.163 / Chapter 6.6 --- Conclusion --- p.164 / Chapter 7 --- Conclusion and Future Work --- p.165 / Chapter 7.1 --- Contributions --- p.165 / Chapter 7.2 --- Future Work --- p.169 / Chapter 7.3 --- Conclusion --- p.171 / Bibliography --- p.174 / Chapter A --- Sample Videos --- p.180 / Chapter B --- List of Abbreviations --- p.183
313

Combining video and performance: a double performative engagement.

January 2006 (has links)
Zheng Bo. / Thesis (M.F.A.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 20-21). / Accompanying disc in DVD format. / Abstracts in English and Chinese.
314

Video decoder for H.264/AVC main profile power efficient hardware design.

January 2011 (has links)
Yim, Ka Yee. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 43). / Abstracts in English and Chinese. / Acknowledgements --- p.vii / TABLE OF CONTENTS --- p.viii / LIST OF TABLES --- p.x / LIST OF FIGURES --- p.xi / Chapter CHAPTER 1 : --- INTRODUCTION --- p.1 / Chapter 1.1. --- Motivation --- p.1 / Chapter 1.2. --- Overview --- p.2 / Chapter 1.3. --- H.264 Overview --- p.2 / Chapter CHAPTER 2 : --- CABAC --- p.7 / Chapter 2.1. --- Introduction --- p.7 / Chapter 2.2. --- CABAC Decoder Implementation Review --- p.7 / Chapter 2.3. --- CABAC Algorithm Review --- p.9 / Chapter 2.4. --- Proposed CABAC Decoder Implementation --- p.13 / Chapter 2.5. --- FSM Method Bin Matching --- p.20 / Chapter 2.6. --- CABAC Experimental Results --- p.22 / Chapter 2.7. --- Summary --- p.26 / Chapter CHAPTER 3 : --- INTEGRATION --- p.27 / Chapter 3.1. --- Introduction --- p.27 / Chapter 3.2. --- Reused Baseline Decoder Review --- p.27 / Chapter 3.3. --- Integration --- p.30 / Chapter 3.4. --- Proposed Solution for Motion Vector Decoding --- p.33 / Chapter 3.5. --- Synthesis Result and Performance Analysis --- p.37 / Chapter CHAPTER 4 : --- CONCLUSION --- p.39 / Chapter 4.1. --- Main Contribution --- p.39 / Chapter 4.2. --- Reflection on the Development --- p.39 / Chapter 4.3. --- Future Work --- p.41 / BIBLIOGRAPHY --- p.43
315

Novel error resilient techniques for the robust transport of MPEG-4 video over error-prone networks. / CUHK electronic theses & dissertations collection

January 2004 (has links)
Bo Yan. / "May 2004." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (p. 117-131). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
316

Arbitrary block-size transform video coding. / CUHK electronic theses & dissertations collection

January 2011 (has links)
Besides ABT with higher-order transforms, transform-based template matching is also investigated. A fast method of template matching, called Fast Walsh Search, is developed. This search method has similar accuracy to exhaustive search but a significantly lower computation requirement. / In this thesis, the development of simple but efficient order-16 transforms is shown. Analysis and comparison with existing order-16 transforms have been carried out. The proposed order-16 transforms were integrated individually into the existing coding standard reference software so as to achieve a new ABT system. In the proposed ABT system, order-4, order-8 and order-16 transforms coexist. The selection of the most appropriate transform is based on the rate-distortion performance of these transforms. A remarkable improvement in coding performance is shown in the experimental results. A significant bit rate reduction can be achieved with our proposed ABT system while both subjective and objective quality remain unchanged. / Prior knowledge of the coefficient distribution is a key to achieving better coding performance. This is very useful in many areas of coding, such as rate control and rate-distortion optimization. It is also shown that the coefficient distribution of predicted residue is closer to a Cauchy distribution than to the traditionally expected Laplace distribution. This can effectively improve the existing processing techniques. / Three kinds of order-16 orthogonal DCT-like integer transforms are proposed in this thesis. The first one is the simple integer transform, which is expanded from the existing order-8 ICT. The second one is the hybrid integer transform from the Dyadic Weighted Walsh Transform (DWWT). It is shown that it has a better performance than the simple integer transform. The last one is a recursive transform: an order-2N transform can be derived from the order-N one. It is very close to the DCT.
This recursive transform can be implemented in two different ways, denoted as LLMICT and CSFICT. They have excellent coding performance. These proposed transforms are investigated and implemented into the reference software of H.264 and AVS. They are also compared with other order-16 orthogonal integer transforms. Experimental results show that the proposed transforms give excellent coding performance and are easy to compute. / Transform is a very important coding tool in video coding. It decorrelates the pixel data and removes the redundancy among pixels so as to achieve compression. Traditionally, the order-8 transform is used in video and image coding. The latest video coding standards, such as H.264/AVC, adopt both order-4 and order-8 transforms. The adaptive use of more than one transform size is known as Arbitrary Block-size Transform (ABT). Transforms other than order-4 and order-8 can also be used in ABT. It is expected that larger transform sizes such as order-16 will benefit more in video sequences with higher resolutions such as 720p and 1080p sequences. As a result, the order-16 transform is introduced into the ABT system. / Fong, Chi Keung. / Adviser: Wai Kuen Cham. / Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references. / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
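The record above describes an ABT encoder in which order-4, order-8 and order-16 transforms coexist and the encoder picks whichever minimizes rate-distortion cost. As a hedged illustration only (this is not the thesis's implementation; the floating-point DCT, uniform quantizer, log2-based rate proxy and the λ value are all assumptions made for the sketch), the selection rule J = D + λR can be written as:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix of order n (rows = frequencies).
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def rd_cost(block, n, qstep, lam):
    # Tile the block into n x n sub-blocks, transform, quantize, and
    # return J = D + lam * R with SSE distortion and a crude rate proxy.
    T = dct_matrix(n)
    D = R = 0.0
    for i in range(0, block.shape[0], n):
        for j in range(0, block.shape[1], n):
            b = block[i:i + n, j:j + n]
            c = T @ b @ T.T                     # forward transform
            q = np.round(c / qstep)             # uniform quantization
            rec = T.T @ (q * qstep) @ T         # dequantize + inverse
            D += float(np.sum((b - rec) ** 2))  # SSE distortion
            nz = q[q != 0]
            # Rate proxy: ~1 bit per nonzero plus its magnitude bits.
            R += nz.size + float(np.sum(np.log2(np.abs(nz) + 1)))
    return D + lam * R

def select_transform(block, sizes=(4, 8, 16), qstep=8.0, lam=10.0):
    # Pick the transform order with the lowest RD cost.
    # The block side must be a multiple of every candidate size.
    return min(sizes, key=lambda n: rd_cost(block, n, qstep, lam))
```

For a smooth block the larger transform tends to win, since the same energy is packed into fewer quantized coefficients at similar distortion, which is the intuition behind extending ABT to order-16 for high-resolution content.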
317

The film of tomorrow : a cultural history of videoblogging

Berry, Trine Bjørkmann January 2015 (has links)
Videoblogging is a form of cultural production that emerged in the early 2000s as a result of the increasing availability of cheap digital recording equipment, new video-editing software, video website hosting and innovative distribution networks across the internet. This thesis explores the close entanglement of culture and technology in this early and under-examined area of media production – most notably in the self-definition and development of a specific community around video practices and technologies between 2004 and 2009. These videobloggers' digital works are presented as an original case study of material digital culture on the internet, which also produced a distinctive aesthetic style. The thesis traces the discourses and technological infrastructures that were developed both within and around the community of videobloggers and that created the important pre-conditions for the video artefacts they produced. Through an ethnographically-informed cultural history of the practices and technologies of videoblogging, this thesis engages with the way in which new forms of cultural and technical hybrids have emerged in an increasingly digital age. The ethnographic research is informed by histories of film and video, which contribute to the theoretical understanding and contextualisation of videoblogging – as an early digital community – which has been somewhat neglected in favour of research on mainstream online video websites, such as YouTube. The thesis also contributes to scholarly understanding of contemporary digital video practices, and explores how the history of earlier amateur and semi-professional film and video has influenced the practices, technologies and aesthetic styles of the videobloggers. It is also shown how their aesthetic has been drawn on and amplified in network culture, mainstream media, and contemporary media and cultural production.
Through a critical mapping of the socio-technical structures of videoblogging, the thesis argues that the trajectories of future media and cultural production draw heavily from the practices and aesthetics of these early hybrid networked cultural-technical communities.
318

Diversão e prazer declarados por crianças que jogam WII® : entre o real e o virtual /

Schiavon, Mauro Klebis. January 2012 (has links)
Orientador: Afonso Antonio Machado / Banca: Roberto Tadeu Iaochite / Banca: Eloísa Hilsdorf Rocha Gimenez / Resumo: Nas últimas duas décadas do século XX e nesse início de século XXI temos visto um crescimento acentuado de crianças que se utilizam da nova tecnologia dos videogames para se divertirem, quer seja com amigos, familiares ou mesmo sozinhas. Quando surgiram, os videogames eram jogados pelos jovens e adultos, mas esta tendência modificou-se e agora os videogames começaram a ser jogados pelas crianças. Os estados emocionais são pouco estudados na relação entre criança e videogame. As emoções são imprescindíveis nas tomadas de decisões, das mais simples as mais complexas. Elas são fundamentais também para a sociabilidade, além de organizar a forma como os dados e os acontecimentos são armazenados na memória. O objetivo deste estudo foi compreender as influências do videogame Wii® na vida da criança, de modo a identificar, descrever e analisar os motivos e as emoções declaradas por crianças usuárias de jogos virtuais do tipo Wii®, tentando ainda compreender a relação estabelecida entre o real e o virtual, pela criança, diante dos tais jogos virtuais. Para tanto, a metodologia utilizada foi embasada na pesquisa qualitativa onde coletamos os dados através da observação sistemática de crianças jogando Wii®, fornecendo o registro naquilo que chamamos de caderno de campo e o segundo instrumento de coleta de dados presente em nosso trabalho foi a entrevista. Esta técnica diz respeito à prestação de informações ou opiniões sobre determinada temática, feita de forma oral, pelo entrevistado. Foi possível constatar que, os sujeitos desta pesquisa não reconhecem aspectos relacionados com a vida real nos jogos de Wii®. Concluímos que as razões mais evidentes para as crianças jogarem Wii® são: a diversão e o prazer; a sensação de serem capazes de alcançar objetivos e, deste modo, sentirem... 
(Resumo completo, clicar acesso eletrônico abaixo) / Abstract: The last two decades of the 20th century and the early 21st century have seen a dramatic growth in the number of children who use the new technology of video games to enjoy themselves, whether with family, friends or even alone. When they first appeared, video games were played by young people and adults, but this trend has changed and video games are now also played by children. Emotional states are little studied in the relationship between child and videogame. Emotions are indispensable in decision-making, from the simplest choices to the most complex. They are also fundamental to sociability, in addition to organizing the way data and events are stored in memory. The objective of this study was to understand the influence of the Wii® videogame on children's lives, in order to identify, describe and analyze the motives and emotions declared by child users of virtual games of the Wii® type, while also trying to understand the relationship the child establishes between the real and the virtual when facing such games. The methodology was based on qualitative research: we collected data through systematic observation of children playing Wii®, recorded in what we call a field notebook, and through interviews, a technique in which the respondent orally provides information or opinions on a given topic. It was possible to observe that the subjects of this research do not recognize aspects of real life in Wii® games. We conclude that the most evident reasons for children to play Wii® are fun and pleasure; the feeling of being able to reach goals and thus feeling good at things; and the possibility of interaction with others, a factor that allows... (Complete abstract click electronic access below) / Mestre
319

Adaptive coding and rate control of video signals / CUHK electronic theses & dissertations collection

January 2015 (has links)
As bandwidth has become much cheaper in recent years, video applications are more popular than before. However, the demand for high video resolution, high frame rate, or high bit-depth has continued to grow more rapidly than the cost of video transmission and storage bandwidth has fallen. This calls for more efficient compression techniques, and hence many international video coding standards have been developed over the past decades, such as MPEG-1/2/4 Part 2, H.264/MPEG-4 Part 10 AVC and the latest High Efficiency Video Coding (HEVC) standard. The main objective of this thesis is to analyze the characteristics of video signals and to provide efficient compression and transmission solutions in both H.264/AVC and HEVC video systems. The three main parts of this work are briefly summarized below. / The first part concerns transform coding. Transform coding has been widely used to remove the spatial redundancy of prediction residuals in modern video coding standards. However, since residual blocks exhibit diverse characteristics in a video sequence, conventional sinusoidal transforms with fixed transform kernels may result in low coding efficiency. To tackle this problem, we propose a novel content-adaptive transform framework for H.264/AVC-based video coding. We propose to utilize pixel rearrangement to dynamically adjust the transform kernels to adapt to the video signals. In addition, unlike traditional adaptive transforms, the proposed method obtains the transform kernels from the reconstructed block, and hence it consumes only one logic indicator for each transform unit. Moreover, a spiral-scanning method is developed to reorder the transform coefficients for better entropy coding. Experimental results on the Key Technical Area (KTA) platform show that the proposed method can achieve a significant bit reduction under both the all-intra and low-delay configurations. / The second part investigates next-generation video coding.
Due to the increase of display resolution from High-Definition (HD) to Ultra-HD, efficiently compressing Ultra-HD signals is essential in the development of future video compression systems. High-resolution video coding benefits from larger prediction block sizes and, accordingly, from larger transforms and quantization of the prediction residues. However, in the current HEVC video coding standard, the maximum coding tree unit (CTU) size is 64x64, which can rule out larger prediction blocks in Ultra-HD video coding and hence harm coding efficiency. Thus, we propose to extend the CTU to a super coding unit (SCU) for next-generation video coding, and two separate coding structures are designed to encode an SCU: the Direct-CTU and SCU-to-CTU modes. In Direct-CTU, an SCU is first split into a number of predefined CTUs, and then the best encoding parameters are searched from the current CTU down to the possible minimum coding unit (MCU). Similarly, in SCU-to-CTU, the best encoding parameters are searched from the SCU down to the CTU. In addition, the adaptive loop filter (ALF) and sample adaptive offset (SAO) methods are investigated in the SCU-based video coding framework. We propose to move the filtering control from the SCU level to the coding unit (CU) level, and an improved CU-level ALF signaling method is also proposed to further improve the coding efficiency. Furthermore, an adaptive SAO block method is proposed; this flexibility of SAO blocks can further improve the performance of the traditional method in Ultra-HD video coding. / In the last part, we explore the bit rate control of video transmission. Rate control serves as an important technique to regulate the bit rate of video transmission over a limited bandwidth and to maximize the overall video quality. Video quality fluctuation plays a key role in human visual perception, and hence many rate control algorithms have been developed to maintain a consistent quality for video communication.
We propose a novel rate control framework based on the Lagrange multiplier in HEVC. With the assumption of constant quality control, a new relationship between the distortion and the Lagrange multiplier is established. Based on the proposed distortion model and buffer status, we obtain a computationally feasible solution to the problem of minimizing the distortion variation across video frames at the coding tree unit level. Extensive simulation results show that our method outperforms the HEVC rate control by providing more accurate rate regulation, lower video quality fluctuation and more stable buffer fullness. / 近些年，隨著帶寬費用變得越來越便宜，各種視頻應用比以前更為流行了。然而，人們對于高視頻分辨率，高幀率，或更高比特深度的需求增加了視頻傳輸和存儲帶寬的成本。滿足這樣的需求需要更有效的壓縮技術，因此在過去的幾十年裏，很多國際視頻編碼標准被開發出來，例如MPEG-1/2/4 part 2，H.264/MPEG-4 part 10 AVC和最新高效視頻編碼標准（HEVC）。本論文的主要目的是研究視頻信號的特點，在H.264和HEVC視頻系統中提供高效的壓縮和傳輸解決方案。論文分三部分，簡要總結如下。 / 第一部分涉及變換編碼。在現代視頻編碼標准中，變換編碼已被廣泛用于消除預測殘差的空間冗余度。然而，由于在視頻序列中的預測殘差塊有著不同的特性，傳統的變換采用固定變換矩陣可能會導致低的編碼效率。為了解決這個問題，我們提出了一種新的基于內容自適應變換方案的視頻編碼框架。我們利用重排像素，動態調整變換矩陣以適應當前的視頻信號。此外，與傳統的自適應變換不同之處在于，我們所提出的方法得到的變換矩陣不需要傳輸到解碼端，而它僅消耗一個邏輯單元指示當前變換矩陣。此外，我們提出了相應的變換系數掃描方法以達到更有效的熵編碼。在關鍵技術領域（KTA）平台，實驗結果表明本方法可以有效地改善幀內和低延遲配置下的編碼效率。 / 第二部分探討了新一代視頻編碼。由于主流顯示分辨率從高清到超高清的變化，如何有效地壓縮超高清視頻信號是未來視頻壓縮技術發展的關鍵。超高分辨率視頻編碼的好處在于可從一個更大的預測塊對其預測殘差進行變換和量化。然而，在目前HEVC視頻編碼標準中，最大編碼樹單元（CTU）尺寸是64x64，其可能限制較大的預測塊，從而影響編碼效率。因此，我們提出了擴展CTU為SCU。其中編碼一個SCU可能用到兩個獨立的編碼模式，包括Direct-CTU和SCU-to-CTU。在Direct-CTU模式中，SCU被分割成許多預定義的CTUs，然後，最佳的編碼參數搜索範圍為CTU到MCU。同樣，在SCU-to-CTU模式中，最佳的編碼參數搜索範圍是SCU到CTU。此外，自適應環路濾波器（ALF）和自適應採樣偏移（SAO）在新的SCU編碼框架下進行了研究。我們提出將濾波控制從SCU級別更改為CU級別，並提出了新的ALF信號傳送方法，進一步提高傳統方法在超高清視頻編碼中的性能。 / 在最後一部分，我們探討了視頻傳輸中的碼率控制。碼率控制作為一種重要的技術，在有限的帶寬條件下，最大限度地提高整體的視頻質量。視頻質量波動在人眼視覺感知中起著至關重要的作用，因此許多碼率控制方法得到了廣泛的發展，以追求提供穩定的視頻通信質量。我們提出了一個新的基于HEVC的拉格朗日乘數碼率控制框架。在平穩視頻質量的假設下，我們提出了一種新的失真和拉格朗日乘子之間的關係。基于新提出的失真模型和緩沖區的狀態，我們得到一個計算上可行的解決方案，以最大限度地減少在編碼樹單元級的視頻幀的失真變化。大量的仿真結果表明，我們的方法優于HEVC的碼率控制，它可以提供更精確的碼率調節，降低視頻質量波動，以及維護穩定的緩沖區占有率。 / Wang, Miaohui. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2015.
/ Includes bibliographical references (leaves 158-164). / Abstracts and acknowledgements also in Chinese. / Title from PDF title page (viewed on 11 October 2016). / Detailed summary in vernacular field only.
320

Scalable video coding by stream morphing

Macnicol, James Roy. January 2002 (has links) (PDF)
"October 2002 (Revised May 2003)"--T.p. Includes bibliographical references (leaves 256-264).
