視覺質量評估在各種多媒體應用中起到了關鍵性的作用。因為人類的視覺系統是視覺信號的最終接收端,主觀視覺質量評估被認為是最可靠的視覺質量評估方法。然而,主觀視覺質量評估耗時、昂貴,並且不適合線上應用。因此,自動的、客觀的視覺質量評估方法已經被開發並被應用於很多實用場合當中。最廣泛使用的客觀視覺質量度量方法,如均方差(MSE)、峰值信噪比(PSNR)等,與人們對視覺信號質量的判斷相距甚遠。因此,開發更準確的客觀質量度量算法將會成為未來視覺信號處理和傳輸應用成功與否的重要因素。 / 該論文主要研究全參考客觀視覺質量度量算法。主要內容分為三部分。 / 第一部分討論圖像質量評估。首先研究了一個經典的圖像質量度量算法,即SSIM。提出了一個新的加權方法並整合至SSIM當中,提升了SSIM的預測精度。之後,受到前面這個工作的啟發,設計了一個全新的圖像質量度量算法,將噪聲分類為加性噪聲和細節損失兩大類。這個算法在很多主觀質量圖像資料庫上都有很優秀的預測表現。 / 第二部分研究視頻質量評估。首先,將上面提到的全新的圖像質量度量算法通過挖掘視頻運動信息和時域相關的人眼視覺特性擴展為視頻質量度量算法。方法包括:使用基於人眼運動的時空域對比敏感度方程,使用基於運動向量的時域視覺掩蓋,使用基於認知層面的時域整合等等。這個算法被證明對處理標清和高清序列同樣有效。其次,提出了一個測量視頻幀間不一致程度的算法。該算法被整合到MSE中,提高了MSE的預測表現。 / 上面提到的算法只考慮到了亮度噪聲。論文的最後部分通過一個具體應用(色差立體圖像生成)研究了色度噪聲。色差立體圖像是三維立體顯示技術的其中一種方法。它使在普通電視、電腦顯示器、甚至印刷品上顯示三維立體效果成為可能。我們提出了一個新的色差立體圖像生成方法。該方法工作在CIELAB彩色空間,並力圖匹配原始圖像與觀測立體圖像的色彩屬性值。 / Visual quality assessment (VQA) plays a fundamental role in multimedia applications. Since the human visual system (HVS) is the ultimate viewer of the visual information, subjective VQA is considered to be the most reliable way to evaluate visual quality. However, subjective VQA is time-consuming, expensive, and not feasible for on-line manipulation. Therefore, automatic objective VQA algorithms, namely visual quality metrics, have been developed and widely used in practical applications. However, it is well known that the popular visual quality metrics, such as Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), etc., correlate poorly with human perception of visual quality. The development of more accurate objective VQA algorithms is therefore of paramount importance to future visual information processing and communication applications. / In this thesis, full-reference objective VQA algorithms are investigated. The work consists of three parts, briefly summarized below. / The first part concerns image quality assessment. It starts with the investigation of a popular image quality metric, i.e., the Structural Similarity Index (SSIM).
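For reference, the MSE and PSNR baselines discussed above can be computed as follows. This is a minimal sketch in pure Python over flattened pixel sequences; the default peak value of 255 assumes 8-bit pixels.

```python
import math

def mse(ref, dist):
    """Mean Square Error between two equal-length pixel sequences."""
    assert len(ref) == len(dist), "images must have the same size"
    return sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)

def psnr(ref, dist, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; peak=255.0 assumes 8-bit pixels."""
    e = mse(ref, dist)
    # Identical images have zero error, hence infinite PSNR.
    return math.inf if e == 0 else 10.0 * math.log10(peak ** 2 / e)
```

As the abstract points out, these fidelity measures weight every pixel error equally, which is why they correlate poorly with perceived visual quality.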
A novel weighting function is proposed and incorporated into SSIM, which leads to a substantial performance improvement in terms of matching subjective ratings. Inspired by this work, a novel image quality metric is developed by separately evaluating two distinct types of spatial distortions: detail losses and additive impairments. The proposed method demonstrates state-of-the-art predictive performance on most of the publicly available subjective quality image databases. / The second part investigates video quality assessment. We extend the proposed image quality metric to assess video quality by exploiting motion information and temporal HVS characteristics, e.g., the eye-movement spatio-velocity contrast sensitivity function, temporal masking using motion vectors, temporal pooling considering human cognitive behaviors, etc. It has been experimentally verified that the proposed video quality metric achieves good performance on both standard-definition and high-definition video databases. We also propose a novel method to measure temporal inconsistency, an essential type of temporal video distortion. It is incorporated into MSE for video quality assessment, and experiments show that it significantly enhances MSE's predictive performance. / The aforementioned algorithms analyze only luminance distortions. In the last part, we investigate chrominance distortions for a specific application: anaglyph image generation. The anaglyph image is one of the 3D display techniques, enabling stereoscopic perception on traditional TVs, PC monitors, projectors, and even printed paper. Three perceptual color attributes are taken into account for the color distortion measure, i.e., lightness, saturation, and hue, based on which a novel anaglyph image generation algorithm is developed via approximation in the CIELAB color space. / Detailed summary in vernacular field only.
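The SSIM index that the first part builds on compares luminance, contrast, and structure statistics between a reference and a distorted signal. The sketch below applies the standard SSIM formula once over whole signals rather than averaging it over local windows as the full metric does; the constants C1 = (0.01·255)² and C2 = (0.03·255)² are the commonly used defaults for 8-bit data.

```python
def ssim_global(x, y, peak=255.0):
    """SSIM computed globally over two equal-length pixel sequences.

    The standard metric averages this quantity over local windows;
    a single global evaluation is a simplification for illustration.
    """
    assert len(x) == len(y) and len(x) > 0, "signals must match in size"
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                    # mean luminance
    vx = sum((a - mx) ** 2 for a in x) / n             # variance of x
    vy = sum((b - my) ** 2 for b in y) / n             # variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2    # stabilizers
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A signal compared with itself scores exactly 1, and the score decreases as the structural correlation between the two signals drops; the thesis's Chapter 2 weighting operates on the local, windowed version of this index.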
Li, Songnan.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2012.
Includes bibliographical references (leaves 122-130).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Dedication --- p.ii
Acknowledgments --- p.iii
Abstract --- p.vi
Publications --- p.viii
Nomenclature --- p.xii
Contents --- p.xvii
List of Figures --- p.xx
List of Tables --- p.xxii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation and Objectives --- p.1
Chapter 1.2 --- Overview of Subjective Visual Quality Assessment --- p.3
Chapter 1.2.1 --- Viewing condition --- p.4
Chapter 1.2.2 --- Candidate observer selection --- p.4
Chapter 1.2.3 --- Test sequence selection --- p.4
Chapter 1.2.4 --- Structure of test session --- p.5
Chapter 1.2.5 --- Assessment procedure --- p.6
Chapter 1.2.6 --- Post-processing of scores --- p.7
Chapter 1.3 --- Overview of Objective Visual Quality Assessment --- p.8
Chapter 1.3.1 --- Classification --- p.8
Chapter 1.3.2 --- HVS-model-based metrics --- p.9
Chapter 1.3.3 --- Engineering-based metrics --- p.21
Chapter 1.3.4 --- Performance evaluation method --- p.28
Chapter 1.4 --- Thesis Outline --- p.29
Chapter I --- Image Quality Assessment --- p.32
Chapter 2 --- Weighted Structural Similarity Index based on Local Smoothness --- p.33
Chapter 2.1 --- Introduction --- p.33
Chapter 2.2 --- The Structural Similarity Index --- p.33
Chapter 2.3 --- Influence of the Smooth Region on SSIM --- p.35
Chapter 2.3.1 --- Overall performance analysis --- p.35
Chapter 2.3.2 --- Performance analysis for individual distortion types --- p.37
Chapter 2.4 --- The Proposed Weighted-SSIM --- p.40
Chapter 2.5 --- Experiments --- p.41
Chapter 2.6 --- Summary --- p.43
Chapter 3 --- Image Quality Assessment by Decoupling Detail Losses and Additive Impairments --- p.44
Chapter 3.1 --- Introduction --- p.44
Chapter 3.2 --- Motivation --- p.45
Chapter 3.3 --- Related Works --- p.47
Chapter 3.4 --- The Proposed Method --- p.48
Chapter 3.4.1 --- Decoupling additive impairments and useful image contents --- p.48
Chapter 3.4.2 --- Simulating the HVS processing --- p.56
Chapter 3.4.3 --- Two quality measures and their combination --- p.58
Chapter 3.5 --- Experiments --- p.59
Chapter 3.5.1 --- Subjective quality image databases --- p.59
Chapter 3.5.2 --- Parameterization --- p.60
Chapter 3.5.3 --- Overall performance --- p.61
Chapter 3.5.4 --- Statistical significance --- p.62
Chapter 3.5.5 --- Performance on individual distortion types --- p.64
Chapter 3.5.6 --- Hypotheses validation --- p.66
Chapter 3.5.7 --- Complexity analysis --- p.69
Chapter 3.6 --- Summary --- p.70
Chapter II --- Video Quality Assessment --- p.71
Chapter 4 --- Video Quality Assessment by Decoupling Detail Losses and Additive Impairments --- p.72
Chapter 4.1 --- Introduction --- p.72
Chapter 4.2 --- Related Works --- p.73
Chapter 4.3 --- The Proposed Method --- p.74
Chapter 4.3.1 --- Framework --- p.74
Chapter 4.3.2 --- Decoupling additive impairments and useful image contents --- p.75
Chapter 4.3.3 --- Motion estimation --- p.76
Chapter 4.3.4 --- Spatio-velocity contrast sensitivity function --- p.77
Chapter 4.3.5 --- Spatial and temporal masking --- p.79
Chapter 4.3.6 --- Two quality measures and their combination --- p.80
Chapter 4.3.7 --- Temporal pooling --- p.81
Chapter 4.4 --- Experiments --- p.82
Chapter 4.4.1 --- Subjective quality video databases --- p.82
Chapter 4.4.2 --- Parameterization --- p.83
Chapter 4.4.3 --- With/without decoupling --- p.84
Chapter 4.4.4 --- Overall predictive performance --- p.85
Chapter 4.4.5 --- Performance on individual distortion types --- p.88
Chapter 4.4.6 --- Cross-distortion performance evaluation --- p.89
Chapter 4.5 --- Summary --- p.91
Chapter 5 --- Temporal Inconsistency Measure --- p.92
Chapter 5.1 --- Introduction --- p.92
Chapter 5.2 --- The Proposed Method --- p.93
Chapter 5.2.1 --- Implementation --- p.93
Chapter 5.2.2 --- MSE TIM --- p.94
Chapter 5.3 --- Experiments --- p.96
Chapter 5.4 --- Summary --- p.97
Chapter III --- Application related to Color and 3D Perception --- p.98
Chapter 6 --- Anaglyph Image Generation --- p.99
Chapter 6.1 --- Introduction --- p.99
Chapter 6.2 --- Anaglyph Image Artifacts --- p.99
Chapter 6.3 --- Related Works --- p.101
Chapter 6.3.1 --- Simple anaglyphs --- p.101
Chapter 6.3.2 --- XYZ and LAB anaglyphs --- p.102
Chapter 6.3.3 --- Ghosting reduction methods --- p.103
Chapter 6.4 --- The Proposed Method --- p.104
Chapter 6.4.1 --- Gamma transfer --- p.104
Chapter 6.4.2 --- Converting RGB to CIELAB --- p.105
Chapter 6.4.3 --- Matching color appearance attributes in CIELAB color space --- p.106
Chapter 6.4.4 --- Converting CIELAB to RGB --- p.110
Chapter 6.4.5 --- Parameterization --- p.111
Chapter 6.5 --- Experiments --- p.112
Chapter 6.5.1 --- Subjective tests --- p.112
Chapter 6.5.2 --- Results and analysis --- p.113
Chapter 6.5.3 --- Complexity --- p.115
Chapter 6.6 --- Summary --- p.115
Chapter 7 --- Conclusions --- p.117
Chapter 7.1 --- Contributions of the Thesis --- p.117
Chapter 7.2 --- Future Research Directions --- p.120
Bibliography --- p.122
Identifier | oai:union.ndltd.org:cuhk.edu.hk/oai:cuhk-dr:cuhk_328098 |
Date | January 2012 |
Contributors | Li, Songnan., Chinese University of Hong Kong Graduate School. Division of Electronic Engineering. |
Source Sets | The Chinese University of Hong Kong |
Language | English, Chinese |
Detected Language | English |
Type | Text, bibliography |
Format | electronic resource, remote, 1 online resource (xxii, 130 leaves) : ill. (chiefly col.) |
Rights | Use of this resource is governed by the terms and conditions of the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License (http://creativecommons.org/licenses/by-nc-nd/4.0/) |