
Video motion estimation and noise reduction.

Abstract (Chinese, translated):

With the rapid spread of digital cameras, camera phones and surveillance cameras, countless video recordings are created every day. Motion estimation is a fundamental problem in video processing, usually referred to as optical flow estimation. Existing optical flow algorithms cannot handle videos that undergo large scale changes, yet scale changes are very common in videos and photographs, so a scale-invariant optical flow algorithm is of great help to other video processing operations such as denoising. We therefore propose a new method for this problem, establishing dense correspondence between pixels at different scales in two frames. We introduce a new framework with pixel-level scale variables and an effective numerical scheme that iteratively optimizes the discrete scale variables and the continuous flow variables. This scheme notably extends the practicality of optical flow estimation to natural scenes containing various types of motion.

Video captured by all kinds of imaging devices is degraded by noise to varying degrees. Although many video denoising algorithms have been proposed, many problems remain when they are applied in practice. We therefore design a low-complexity yet effective real-time video denoising algorithm: high-quality optical flow estimation is introduced into the denoising process to register the frame sequence, and a weighted averaging algorithm restores a noise-free frame from the registered raw frames. Experimental results show that, compared with other algorithms, ours recovers more detail; more importantly, it preserves temporal coherence, which is essential to video quality.

Finally, we study the chrominance (color) noise commonly seen in videos and images captured under insufficient lighting. Such noise cannot be removed effectively by existing algorithms, which usually assume Gaussian or Poisson noise. Based on our observation and analysis of luminance and chrominance noise, we propose a new denoising method: multi-resolution dual bilateral filtering in which a luminance layer denoised by an existing algorithm guides the denoising of the chrominance layers. Both visual and quantitative evaluations confirm the effectiveness of our algorithm.

Abstract (English):

With the popularity of digital cameras, mobile phone cameras and surveillance systems, numerous video clips are created every day. Motion estimation is one of the fundamental tasks in video processing. Current optical flow estimation algorithms cannot deal with frames that exhibit large scale variation. Because scale variation commonly arises in images and videos, a scale-invariant optical flow algorithm is important and fundamental for other video operations such as video denoising. In light of this, we propose a new method aiming to establish dense correspondence between two frames containing pixels at different scales. We contribute a new framework that takes pixel-wise scale into consideration in optical flow estimation, and propose an effective numerical scheme that iteratively optimizes the discrete scale variables and the continuous flow variables. This scheme notably expands the practicality of optical flow in natural scenes containing different types of object movement.

Further, videos captured by all kinds of sensors are generally contaminated by noise. Although many algorithms have been published, many problems remain when applying them to real cases. We design a low-complexity but effective real-time video denoising framework by integrating robust optical flow estimation into the denoising process to locally register frame sequences, and by designing a weighted averaging algorithm that restores a latent clean frame from a sequence of well-registered frames. Experiments show that our algorithm recovers more detail than other state-of-the-art video denoising algorithms. More importantly, our method preserves temporal coherence, which is vital for videos.

Lastly, we study the chrominance noise that is commonly observed in both videos and images taken under insufficient light conditions. This kind of noise cannot be effectively reduced by state-of-the-art denoising methods that assume a Gaussian or Poisson distribution. Based on the observation of the different characteristics of luminance and chrominance noise, we propose a new denoising strategy that employs multi-resolution dual bilateral filtering on the chrominance layers under the guidance of a well-estimated luminance layer. Both visual and quantitative evaluations demonstrate the effectiveness of our algorithm.

Author: Dai, Zhenlong.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2012.
Includes bibliographical references (leaves 81-90).
Abstracts also in Chinese.
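One illustrative way to formalize the scale-aware flow estimation summarized above is to attach a discrete scale label s_i to every pixel in addition to its flow vector. This is a sketch under assumptions (a robust penalty ψ, a label-smoothness term), not necessarily the thesis's exact energy:

E(\mathbf{w}, s) = \sum_i \psi\!\left( \left\| I_2(\mathbf{x}_i + \mathbf{w}_i,\, s_i) - I_1(\mathbf{x}_i) \right\|^2 \right)
                 + \alpha \sum_i \left\| \nabla \mathbf{w}_i \right\|^2
                 + \beta \sum_{(i,j) \in \mathcal{N}} [\, s_i \neq s_j \,]

Here I_2(·, s) denotes the second frame resampled at scale s. An alternating scheme of the kind described in the abstract would fix the discrete scales s while solving for the continuous flow w with a standard variational solver, then fix w while re-estimating s, and iterate.

The registration-plus-averaging denoising step can likewise be pictured with a minimal NumPy sketch. Everything below (function name, parameters, Gaussian weighting) is an illustrative assumption rather than the thesis implementation; it presumes that neighbouring frames have already been warped onto the reference frame by optical-flow registration.

import numpy as np

def fuse_registered_frames(reference, registered, sigma=10.0):
    """Weighted per-pixel average of a noisy reference frame and
    neighbouring frames already warped onto it.

    reference  : H x W float array (the frame being denoised)
    registered : list of H x W float arrays aligned to `reference`
    sigma      : controls how fast a pixel's weight decays with its
                 residual alignment/intensity error
    """
    ref = reference.astype(np.float64)
    acc = ref.copy()              # the reference contributes with weight 1
    wsum = np.ones_like(ref)
    for frame in registered:
        f = frame.astype(np.float64)
        err2 = (f - ref) ** 2     # residual error after registration
        w = np.exp(-err2 / (2.0 * sigma ** 2))  # misaligned pixels get small weight
        acc += w * f
        wsum += w
    return acc / wsum             # wsum >= 1 everywhere, so no division by zero

Finally, the chrominance-denoising idea (range weights driven by a clean luminance guide as well as by the chrominance itself) can be sketched at a single resolution level; a multi-resolution variant would apply the same filter on a pyramid of the chrominance layer. Again, names and parameter values are illustrative assumptions.

import numpy as np

def dual_bilateral_chroma(chroma, luma, radius=5,
                          sigma_s=3.0, sigma_l=0.05, sigma_c=0.1):
    """One level of a dual bilateral filter on a chrominance channel.

    chroma : noisy chrominance channel, float array in [0, 1]
    luma   : already-denoised luminance channel used as the guide
    The range weight combines luminance and chrominance differences, so
    edges visible in the clean luminance layer are preserved while
    low-frequency colour blotches are averaged away.  Plain loops are
    kept for clarity; a practical version would be vectorized.
    """
    h, w = chroma.shape
    cpad = np.pad(chroma, radius, mode="reflect")
    lpad = np.pad(luma, radius, mode="reflect")

    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))

    out = np.empty_like(chroma)
    for y in range(h):
        for x in range(w):
            cwin = cpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            lwin = lpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            wr = (np.exp(-(lwin - luma[y, x]) ** 2 / (2.0 * sigma_l ** 2))
                  * np.exp(-(cwin - chroma[y, x]) ** 2 / (2.0 * sigma_c ** 2)))
            wgt = spatial * wr
            out[y, x] = (wgt * cwin).sum() / wgt.sum()
    return out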
Contents:
Abstract --- p.i
Acknowledgement --- p.v
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation and Objectives --- p.1
Chapter 1.2 --- Our Contributions --- p.6
Chapter 1.3 --- Thesis Outline --- p.8
Chapter 2 --- Background --- p.10
Chapter 2.1 --- Optical Flow Estimation --- p.10
Chapter 2.2 --- Single Image Denoising --- p.15
Chapter 2.3 --- Multi-image and Video Denoising --- p.17
Chapter 3 --- Scale Invariant Optical Flow --- p.20
Chapter 3.1 --- Related Work --- p.23
Chapter 3.2 --- Optical Flow Model with Scale Variables --- p.25
Chapter 3.3 --- Optimization --- p.31
Chapter 3.3.1 --- Computing E[zi] --- p.32
Chapter 3.3.2 --- Minimizing Optical Flow Energy --- p.32
Chapter 3.3.3 --- Overall Computation Framework --- p.34
Chapter 3.4 --- Experiments --- p.37
Chapter 3.4.1 --- Evaluation of Our Model to Handle Scales --- p.37
Chapter 3.4.2 --- Comparison with Other Optical Flow Methods --- p.38
Chapter 3.4.3 --- Comparison with Sparse Feature Matching --- p.43
Chapter 3.4.4 --- Evaluation on the Middlebury Dataset --- p.44
Chapter 3.5 --- Summary --- p.46
Chapter 4 --- Optical Flow Based Video Denoising --- p.47
Chapter 4.1 --- Related Work --- p.48
Chapter 4.2 --- Optical Flow Based Video Denoising Framework --- p.48
Chapter 4.2.1 --- Registration --- p.48
Chapter 4.2.2 --- Accumulation --- p.52
Chapter 4.2.3 --- Algorithm Implementation --- p.53
Chapter 4.3 --- Experimental Results --- p.54
Chapter 4.3.1 --- Comparisons with Other Algorithms --- p.54
Chapter 4.3.2 --- Applications --- p.55
Chapter 4.4 --- Limitation and Future Work --- p.55
Chapter 4.5 --- Summary --- p.59
Chapter 5 --- Chrominance Noise Reduction --- p.62
Chapter 5.1 --- Related Work --- p.65
Chapter 5.2 --- Luminance and Chrominance Noise Characteristics --- p.68
Chapter 5.3 --- Luminance and Chrominance Relationship --- p.69
Chapter 5.4 --- Algorithm --- p.71
Chapter 5.4.1 --- Dual Bilateral Filter --- p.71
Chapter 5.4.2 --- Multi-resolution Framework --- p.72
Chapter 5.5 --- Experiments --- p.72
Chapter 5.5.1 --- Quantitative Evaluation --- p.73
Chapter 5.5.2 --- Visual Comparison for Natural Noisy Images --- p.74
Chapter 5.5.3 --- Applications --- p.75
Chapter 5.6 --- Summary --- p.75
Chapter 6 --- Conclusion --- p.79
Bibliography --- p.82

Identifier: oai:union.ndltd.org:cuhk.edu.hk/oai:cuhk-dr:cuhk_328446
Date: January 2012
Contributors: Dai, Zhenlong; Chinese University of Hong Kong Graduate School, Division of Computer Science and Engineering
Source Sets: The Chinese University of Hong Kong
Language: English, Chinese
Detected Language: English
Type: Text, bibliography
Format: electronic resource, remote, 1 online resource (xii, 90 leaves) : ill. (some col.)
Rights: Use of this resource is governed by the terms and conditions of the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License (http://creativecommons.org/licenses/by-nc-nd/4.0/)
