
Image-based motion estimation and deblurring. / CUHK electronic theses & dissertations collection

January 2010 (has links)
Lastly, in the context of motion deblurring, we discuss a few new motion deblurring problems that are significant to blur kernel estimation and non-blind deconvolution. We found that strong edges do not always benefit kernel estimation; under certain circumstances they instead degrade it. This finding leads to a new metric to measure the usefulness of image edges in motion deblurring and a gradient selection process to mitigate their possible adverse effect. It makes it possible to solve for very large blur PSFs on which existing blind deblurring methods easily fail. We also propose an efficient and high-quality kernel estimation method based on the spatial prior and iterative support detection (ISD) kernel refinement, which avoids hard thresholding of the kernel elements to enforce sparsity. We employ the TV-ℓ1 deconvolution model, solved with a new variable-substitution scheme, to robustly suppress noise. / This thesis provides a complete discussion of motion estimation and deblurring and presents new methods to tackle both. In the context of motion estimation, we study the problem of estimating 2D apparent motion from two or more input images, referred to as optical flow estimation. We discuss several fundamental problems in existing optical flow estimation frameworks, including 1) estimating flow vectors for textureless and occluded regions, which was regarded as infeasible owing to large ambiguities, and 2) the inability of the commonly employed coarse-to-fine multi-scale scheme to preserve motion structures in several challenging circumstances. / To address the problem of multi-scale estimation, we extend the coarse-to-fine scheme by complementing the initialization at each scale with sparse feature matching, based on the observation that fine motion structures, especially those with significant and abrupt displacement transitions, cannot always be correctly reconstructed from an incorrect initialization. 
We also adapt the objective function and develop a new optimization procedure, which together constitute a unified system for both large- and small-displacement optical flow estimation. The effectiveness of our method is borne out by extensive experiments on small-displacement benchmark datasets as well as challenging large-displacement optical flow data. / To further increase sub-pixel accuracy, we study how resolution changes affect the flow estimates. We show that simple upsampling effectively reduces errors in sub-pixel correspondence. In addition, we identify the regularization bias problem and explore its relationship to image resolution. We propose a general fine-to-coarse framework to compute sub-pixel color matching for different computer vision problems. Various experiments were performed on motion estimation and stereo matching data. We are able to reduce errors by up to 30%, which would otherwise be very difficult to achieve with conventional optimization methods. / We propose novel methods to solve these problems. Firstly, we introduce a segmentation-based variational model to regularize flow estimates for textureless and occluded regions. Parametric and non-parametric optical flow models are combined, using a confidence map to measure the rigidity of the moving regions. The resulting flow field is of high quality even at motion discontinuities and in textureless regions, and is very useful for applications such as video editing. / Xu, Li. / Adviser: Jiaya Jia. / Source: Dissertation Abstracts International, Volume: 73-03, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 126-137). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. 
[Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
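The non-blind deconvolution step mentioned in the abstract can be illustrated with a generic frequency-domain baseline. The sketch below is a plain Wiener deconvolution in NumPy, not the thesis' TV-ℓ1 model with variable substitution; the noise-to-signal ratio `nsr` is an assumed tuning parameter.

```python
import numpy as np

def wiener_deconv(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution (a generic non-blind
    deconvolution baseline; circular boundary conditions assumed).
    `nsr` is an assumed noise-to-signal ratio controlling regularization."""
    # zero-pad the PSF to the image size before taking its spectrum
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + nsr) applied per frequency
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

With a known PSF and little noise, a small `nsr` recovers the sharp image almost exactly; larger `nsr` trades sharpness for noise suppression.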

Motif-based method for patterned texture defect detection

Ngan, Yuk-tung, Henry., 顏旭東. January 2008 (has links)
published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy

Single image haze removal using dark channel prior. / CUHK electronic theses & dissertations collection

January 2011 (has links)
Haze removal is highly challenging due to its mathematical ambiguity, especially when the input is merely a single image. In this thesis, we propose a simple but effective image prior, called the dark channel prior, to remove haze from a single image. The dark channel prior is a statistical property of outdoor haze-free images: most patches in these images contain pixels that are dark in at least one color channel. Using this prior with a haze imaging model, we can easily recover high-quality haze-free images. Experiments demonstrate that this simple prior is powerful in various situations and outperforms many previous approaches. / Haze is a natural phenomenon that obscures scenes, reduces visibility, and changes colors. It is an annoying problem for photographers since it degrades image quality. It is also a threat to the reliability of many applications, such as outdoor surveillance, object detection, and aerial imaging. Removing haze from images is therefore important in computer vision and graphics. / Speed is an important issue in practice. As in many computer vision problems, the time-consuming step in haze removal is combining pixel-wise constraints with spatial continuities. In this thesis, we propose two novel techniques to solve this problem efficiently. The first is an unconventional large-kernel-based linear solver. The second is a generic edge-aware filter that enables real-time performance. This filter is superior in various applications, including haze removal, in terms of both speed and quality. / The human visual system is able to perceive haze, but the underlying mechanism remains unknown. In this thesis, we present new illusions showing that the human visual system possibly adopts a mechanism similar to the dark channel prior. Our discovery offers new insights for human vision research in psychology and physiology. 
It also reinforces the validity of the dark channel prior as a computer vision algorithm, since mimicking the human visual system is a promising route for artificial intelligence. / He, Kaiming. / Adviser: Xiaoou Tang. / Source: Dissertation Abstracts International, Volume: 73-06, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 131-138). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
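The dark channel statistic described in the abstract is simple to compute: a per-pixel minimum over color channels followed by a local minimum filter. The sketch below follows that definition in plain NumPy; the patch size is an assumed parameter, and no haze-model recovery step is included.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel of an RGB image in [0, 1]: min over color channels,
    then a local min filter over patch x patch neighborhoods.
    For haze-free outdoor images, the prior says this is mostly near zero."""
    min_rgb = image.min(axis=2)            # per-pixel min over channels
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    for i in range(h):                     # local min filter (naive loop)
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

A hazy image lifts the dark channel toward the airlight intensity, which is what makes this statistic usable for transmission estimation.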

Fast and efficient algorithms for TV image restoration. / 基於變分原理的快速有效的圖像重構方法 / CUHK electronic theses & dissertations collection / Ji yu bian fen yuan li de kuai su you xiao de tu xiang chong gou fang fa

January 2010 (has links)
In Part I of the thesis, we focus on fast and efficient algorithms for the TV-L1 minimization problem, which can be applied to recover blurred images corrupted by impulse noise. We construct the half-quadratic algorithm (HQA) for TV-L1 image restoration based on the half-quadratic technique. By introducing the proximal point algorithm into the HQA, we then obtain a modified HQA, which we call the proximal point half-quadratic algorithm (PHA). The PHA aims to decrease the condition number of the coefficient matrix when updating the iterate in the HQA. Many efficient methods have been proposed to solve the TV-L1 minimization problem, such as the primal-dual method, the fast total variational deconvolution method (FTVDM), and the augmented Lagrangian method (ALM). Numerical results show, however, that images restored by the FTVDM and ALM may sometimes appear blocky. The HQA and the PHA are both fast and efficient algorithms for the TV-L1 minimization problem. We prove that both are majorize-minimize algorithms for solving a regularized TV-L1 problem. Under the assumption ker(∇)∩ker(BᵀB) = {0}, the convergence and linear convergence of the HQA are easily obtained. Without this assumption, a convergence result for the PHA is also obtained. We apply our algorithms to deblur images corrupted with impulse noise. The results show that the HQA is faster and more accurate than the ALM and FTVDM for salt-and-pepper noise, and comparable to the two methods for random-valued impulse noise. The PHA is comparable to the HQA in both restoration quality and computational cost; like the HQA, it is faster and more accurate than the ALM and FTVDM for salt-and-pepper noise and comparable to them for random-valued impulse noise. Furthermore, the images recovered by the HQA and the PHA are less blocky. / In this thesis, we study two aspects of image processing. 
Part I is on fast and efficient algorithms for TV-L1 image restoration. Part II is on fast and efficient algorithms for positivity-constrained maximum penalized TV image restoration. / Part II of the thesis focuses on positivity-constrained maximum penalized total variation image restoration. We develop and implement a multiplicative iteration approach, which we call MITV, for positivity-constrained total variation image restoration. The MITV algorithm is based on the multiplicative iterative algorithm originally developed for tomographic image reconstruction. Its advantages are that it is very easy to derive and implement under different image noise models and that it respects the positivity constraint. Our method can be applied to various noise models, including the Gaussian, Poisson, and impulse noise models. In numerical tests, we apply our algorithm to deblur images corrupted with Gaussian noise. The results show that our method gives better restored images than the forward-backward splitting algorithm. / Liang, Haixia. / Adviser: Hon Fu Raymond Chan. / Source: Dissertation Abstracts International, Volume: 73-01, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 87-92). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
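To make the variational TV setting concrete, the sketch below runs gradient descent on a smoothed TV model with a quadratic (L2) data term. It is only an illustration of TV regularization, not the thesis' HQA/PHA solvers, which handle the non-smooth L1 data term; the smoothing constant `eps` and the step size are assumptions chosen for stability.

```python
import numpy as np

def tv_denoise(f, lam=0.1, eps=1e-2, iters=300, step=0.02):
    """Gradient descent on the smoothed TV model
        min_u  sum sqrt(|grad u|^2 + eps) + (lam/2) * ||u - f||^2
    with forward differences and periodic boundaries (np.roll)."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)   # smoothed gradient norm
        px, py = ux / mag, uy / mag
        # divergence of (px, py) via matching backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * (lam * (u - f) - div)     # descend the energy gradient
    return u
```

Half-quadratic methods replace this slow first-order descent with alternating closed-form updates, which is where the speed gains discussed above come from.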

Learning mid-level representations for scene understanding.

January 2013 (has links)
本論文包括了對場景分類框架的描述,并針對自然場景中學習中間層特徵表達的問題做了深入的探討。 / 當前的場景分類框架主要包括特徵提取,特徵編碼,空間信息整合和分類器學習幾個步驟。在這些步驟中,特徵提取是圖像理解的基礎環節。局部特徵表達被認為是計算機視覺在實際應用中成功的關鍵。但是近年來,中間層信息表達逐漸吸引了這個領域的眾多目光。本論文從兩個方面來理解中間層特徵。一個是局部底層信息的整合,另外一個是語義信息的嵌入。本文中,我們的工作同時覆蓋了“整合“和“語意“兩個方面。 / 在自然圖像的統計特徵中,我們發現圖像底層響應的相關性代表了局部結構信息。基於這個發現,我們構造了一個兩層學習模型。第一層是長得類似邊響應的底層信息,第二層是過完備的協方差特徵層,同時也是本文中提到的中間層信息。從“整合局部底層信息“的角度看,我們的方法在這個方向上更進一步。我們將中間層特徵用到了場景分類中,并取得了良好的效果。特別是與人工設計的特徵相比,我們的特徵完全來自于自動學習。我們的協方差特徵的有效性為未來的特徵學習提供了一個新的思路:對於低層響應的相互關係的研究可以幫助構造表達能力更強的特徵。 / 爲了將語義信息加入到中間層特徵的學習中,我們定義了一個名詞叫做“信息化組分“。 所謂的信息化組分指的是那些能夠用來描述一類場景同時又能用來區分不同場景的結構化信息。基於固定秩的產生式模型的假設,我們設計了產生式模型和判別式分類器聯合學習的優化模型。通過將學習得到的信息化組分用到場景分類的實驗中,這類信息化結構的有效性得到了充分地證實。我們同時發現,如果將這一類信息化結構和底層的特徵表達聯合起來作為新的特徵表達,會使得分類的準確率得到進一步地提升。這個發現為我們未來的工作指引了方向:通過嘗試合併多層的特徵表達來提高整體的分類效果。 / This thesis reviews state-of-the-art scene classification frameworks and studies the learning of mid-level representations for scene understanding. / The current scene classification pipeline consists of feature extraction, feature encoding, spatial aggregation, and classifier learning. Among these steps, feature extraction is the most fundamental one for scene understanding. Beyond low level features, obtaining effective mid-level representations has attracted increasing attention in the scene understanding field in recent years. We interpret mid-level representations from two perspectives. One is aggregation from low level cues and the other is embedding semantic information. In this thesis, our work combines both properties, "aggregation" and "semantics". / Given the observation from natural image statistics that correlations among patch-level responses contain strong structure information, we build a two-layer model. The first layer is the patch level response with edge-let appearance, and the second layer contains sparse covariance patterns, which we consider the mid-level representation. From the perspective of "aggregation from low level cues", our work moves one step further in this direction. 
We use the learned covariance patterns in scene classification, where they show promising performance even compared with human-designed features. The effectiveness of our covariance patterns suggests a new clue for feature learning: correlations among lower-layer responses can help build more powerful feature representations. / Motivated by coupling semantic information into the mid-level representation, we define a new term, "informative components", in this thesis. Informative components refer to regions that are descriptive within one class and distinctive among different classes. Based on the generative assumption that descriptive regions fit a fixed-rank model, we provide an integrated optimization framework that combines generative modeling and discriminative learning. Experiments on scene classification bear out the effectiveness of our informative components. We also find that simply concatenating informative components with low level responses further improves classification performance. This points to a future direction: improving representation power by combining multiple-layer representations. / Detailed summary in vernacular field only. / Detailed summary in vernacular field only. / Detailed summary in vernacular field only. / Detailed summary in vernacular field only. / Wang, Liwei. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 62-72). / Abstracts also in Chinese. 
/ Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Scene Classification Pipeline --- p.1 / Chapter 1.2 --- Learning Mid-Level Representations --- p.6 / Chapter 1.3 --- Contributions and Organization --- p.7 / Chapter 2 --- Background --- p.9 / Chapter 2.1 --- Mid-level Representations --- p.9 / Chapter 2.1.1 --- Aggregation From Low Level Cues --- p.10 / Chapter 2.1.2 --- Embedding Semantic Information --- p.13 / Chapter 2.2 --- Scene Data Sets Description --- p.16 / Chapter 3 --- Learning Sparse Covariance Patterns --- p.20 / Chapter 3.1 --- Introduction --- p.20 / Chapter 3.2 --- Model --- p.26 / Chapter 3.3 --- Learning and Inference --- p.28 / Chapter 3.3.1 --- Inference --- p.28 / Chapter 3.3.2 --- Learning --- p.30 / Chapter 3.4 --- Experiments --- p.31 / Chapter 3.4.1 --- Structure Mapping --- p.33 / Chapter 3.4.2 --- 15-Scene Classification --- p.34 / Chapter 3.4.3 --- Indoor Scene Recognition --- p.36 / Chapter 3.5 --- Summary --- p.38 / Chapter 4 --- Learning Informative Components --- p.39 / Chapter 4.1 --- Introduction --- p.39 / Chapter 4.2 --- Related Work --- p.43 / Chapter 4.3 --- Our Model --- p.45 / Chapter 4.3.1 --- Component Level Representation --- p.45 / Chapter 4.3.2 --- Fixed Rank Modeling --- p.46 / Chapter 4.3.3 --- Informative Component Learning --- p.47 / Chapter 4.4 --- Experiments --- p.52 / Chapter 4.4.1 --- Informative Components Learning --- p.54 / Chapter 4.4.2 --- Scene Classification --- p.55 / Chapter 4.5 --- Summary --- p.58 / Chapter 5 --- Conclusion --- p.60 / Bibliography --- p.62

Generalized image deblurring.

January 2013 (has links)
隨著數碼相機與移動照相設備的日益普及,現時的拍攝照片數量遠遠超過以前。數碼照相機的內在缺陷使得數字圖像還原領域得到廣泛的興趣。在本論文中,我們將研究圖像去模糊。圖像去模糊旨在從一張模糊的圖像恢復出清晰的圖像。它是一個在計算機視覺和圖形學有理論和實踐影響力的根本問題。單圖反卷積問題是一個十分挑戰的問題因為我們觀察到的信息比要恢復的信息要少。我們討論模糊核估計並分析為什麼現存的算法可以獲得成功。基於這些分析和理解,我們提出了一個創新的統一框架。該框架具有優異的圖像去模糊性能,並且只需使用很少的運算時間。這個框架還被擴展到了非均一的圖像去模糊上,並且取得與最先進算法相當的效果。 / 在現實模糊圖像中,模糊常常是非均一的,這種模糊具有更大的挑戰性。均一模糊的技術發展使得這個問題相對於以前較容易著手。在本論文中,我們對現存的相機抖動模型進行了詳細的研究並討論其中存在的一些問題。我們對相機模型進行歸納總結並且提出了基於每個平面的非均一圖像去模糊框架。基於這個框架,我們解決了一種特殊形式的模糊。這種模糊是產生於外平面運動,常見於用車載,體育和監控相機拍攝的照片。我們在具有挑戰性的網絡圖片和自己拍攝的圖片上進行測試,驗證了我們的方法的正確性。 / With the popularity of digital cameras and mobile phone cameras, far more photos are taken nowadays than ever before. The imperfection of digital cameras has aroused broad interest in digital image restoration. In this thesis, we study an important topic, image deblurring, which aims to recover a sharp image from only a blurry observation. It is one of the fundamental problems in computer vision and graphics, with both theoretical and practical impact. Single-image blind deconvolution is challenging since there are more unknowns than observations. We discuss problems involving blur kernel estimation and why state-of-the-art methods work. These insights lead to a novel unified framework that achieves decent deblurring performance on publicly available datasets at higher speed. The extension of the framework to non-uniform image deblurring also achieves performance comparable to state-of-the-art methods. / Further, in real blurred images, blur is quite often spatially-variant, which is very difficult to deal with. Advances in uniform deblurring make this problem tractable. We make a detailed study of current camera shake models and discuss problems in these models. We also generalize the framework and propose a plane-wise non-uniform image deblurring framework. Based on it, we tackle a specific type of blur involving out-of-plane motion, which typically appears in photos captured by car-mounted, sports and surveillance cameras. 
We validate our method on challenging photos obtained from the internet and taken by ourselves. / Detailed summary in vernacular field only. / Detailed summary in vernacular field only. / Zheng, Shicheng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 71-79). / Abstracts also in Chinese. / Abstract --- p.i / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation and Objectives --- p.1 / Chapter 1.2 --- Contributions --- p.5 / Chapter 1.3 --- Thesis Outline --- p.6 / Chapter 2 --- Background --- p.8 / Chapter 2.1 --- Non-blind Image Deconvolution --- p.8 / Chapter 2.2 --- Blind Deconvolution --- p.13 / Chapter 2.3 --- Non-uniform Image Deblurring --- p.14 / Chapter 3 --- Unnatural Representation For Natural Image Deblurring --- p.19 / Chapter 3.0.1 --- Analysis --- p.21 / Chapter 3.0.2 --- Our Contribution --- p.23 / Chapter 3.1 --- Framework --- p.24 / Chapter 3.2 --- Optimization --- p.28 / Chapter 3.2.1 --- Solve for k --- p.28 / Chapter 3.2.2 --- Solve for kᵗ⁺¹ with lᵗ⁺¹ --- p.32 / Chapter 3.2.3 --- Final Image Restoration --- p.34 / Chapter 3.3 --- Discussion --- p.34 / Chapter 3.4 --- Experimental Results --- p.38 / Chapter 3.5 --- Concluding Remarks --- p.41 / Chapter 4 --- Forward Motion Deblurring --- p.43 / Chapter 4.1 --- Background --- p.45 / Chapter 4.2 --- Our Model --- p.51 / Chapter 4.3 --- Forward Motion Deblurring --- p.55 / Chapter 4.3.1 --- Kernel and Image Restoration --- p.55 / Chapter 4.4 --- Implementation and Discussion --- p.58 / Chapter 4.5 --- Experimental Results --- p.59 / Chapter 4.6 --- Conclusion and Limitation --- p.64 / Chapter 5 --- Conclusion --- p.65 / Chapter A --- New Sparsity Function --- p.67 / Bibliography --- p.71

Pose tracking of multiple camera system.

January 2009 (has links)
Leung, Man Kin. / Thesis submitted in: October 2008. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 121-126). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Overview --- p.1 / Chapter 1.2 --- Motivation --- p.4 / Chapter 1.3 --- Contributions --- p.5 / Chapter 1.4 --- Organization of the thesis --- p.6 / Chapter 2 --- Literature review --- p.8 / Chapter 2.1 --- Introduction --- p.8 / Chapter 2.2 --- Background knowledge --- p.9 / Chapter 2.2.1 --- Pinhole camera model --- p.10 / Chapter 2.2.2 --- Kalman filter --- p.11 / Chapter 2.2.3 --- Extended Kalman filter --- p.14 / Chapter 2.2.4 --- Unscented Kalman filter --- p.15 / Chapter 2.3 --- Batch method --- p.19 / Chapter 2.3.1 --- Multiple view geometry --- p.19 / Chapter 2.3.2 --- Factorization --- p.21 / Chapter 2.3.3 --- Bundle adjustment --- p.22 / Chapter 2.4 --- Sequential method --- p.23 / Chapter 2.5 --- SLAM using cameras --- p.24 / Chapter 2.6 --- Summary --- p.26 / Chapter 3 --- Pose tracking of a stereo camera system --- p.27 / Chapter 3.1 --- Overview --- p.27 / Chapter 3.1.1 --- Related work --- p.27 / Chapter 3.1.2 --- Contribution --- p.29 / Chapter 3.2 --- Problem definition --- p.29 / Chapter 3.3 --- Algorithm --- p.31 / Chapter 3.3.1 --- Initialization --- p.33 / Chapter 3.3.2 --- Feature tracking and stereo correspondence matching --- p.33 / Chapter 3.3.3 --- Pose tracking based on two trifocal tensors --- p.35 / Chapter 3.3.4 --- Pose tracking using extended Kalman filter (Our EKF-2 approach) --- p.37 / Chapter 3.3.5 --- Pose tracking using unscented Kalman filter (Our UKF-2 approach) --- p.41 / Chapter 3.3.6 --- Pose tracking using differential evolution (Our DE-2 approach) --- p.44 / Chapter 3.4 --- Experiment --- p.49 / Chapter 3.4.1 --- Synthetic experiments --- p.49 / Chapter 3.4.2 --- Real experiments --- p.55 / Chapter 3.5 --- Summary --- 
p.67 / Chapter 4 --- Advance to two pairs of stereo cameras --- p.68 / Chapter 4.1 --- Overview --- p.68 / Chapter 4.1.1 --- Related work --- p.68 / Chapter 4.1.2 --- Contribution --- p.69 / Chapter 4.2 --- Problem definition --- p.70 / Chapter 4.3 --- Algorithm --- p.72 / Chapter 4.3.1 --- Initialization --- p.72 / Chapter 4.3.2 --- Feature tracking and stereo correspondence matching --- p.74 / Chapter 4.3.3 --- Pose tracking based on four trifocal tensors --- p.76 / Chapter 4.3.4 --- Pose tracking using extended Kalman filter (Our EKF-4 approach) --- p.79 / Chapter 4.3.5 --- Pose tracking using unscented Kalman filter (Our UKF-4 approach) --- p.84 / Chapter 4.4 --- Experiment --- p.87 / Chapter 4.4.1 --- Synthetic experiments --- p.87 / Chapter 4.4.2 --- Real experiments --- p.100 / Chapter 4.5 --- Summary --- p.113 / Chapter 5 --- Conclusion --- p.115 / Chapter 5.1 --- Conclusion --- p.115 / Chapter 5.2 --- Scope of Applications --- p.116 / Chapter 5.3 --- Limitations --- p.117 / Chapter 5.4 --- Difficulties --- p.118 / Chapter 5.5 --- Future work --- p.118 / Bibliography --- p.121
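Chapter 2 of this thesis reviews the Kalman filter family before extending it to EKF/UKF pose tracking. As background, one predict/update cycle of the generic linear Kalman filter (textbook form, not the thesis' trackers) might be sketched as:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter.
    x, P: state mean and covariance; z: measurement;
    F, H: transition and measurement matrices; Q, R: noise covariances."""
    # predict: propagate mean and covariance through the dynamics
    x = F @ x
    P = F @ P @ F.T + Q
    # update: fuse the measurement via the Kalman gain
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

The EKF and UKF of Chapters 3 and 4 replace the linear F and H with linearized or sigma-point propagation of the nonlinear camera pose dynamics.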

Representing and analyzing 3D digital shape using distance information /

Svensson, Stina. January 2001 (has links) (PDF)
Diss. (summary) Uppsala : Sveriges lantbruksuniv., 2001. / With 8 appended papers.

Variational image segmentation, inpainting and denoising

Li, Zhi 27 July 2016 (has links)
Variational methods have attracted much attention in the past decade. With rigorous mathematical analysis and computational methods, variational minimization models can handle many practical problems arising in image processing, such as image segmentation and image restoration. We propose a two-stage image segmentation approach for color images. In the first stage, the primal-dual algorithm is applied to efficiently solve the proposed minimization problem, yielding a smoothed image free of irrelevant and trivial detail; in the second stage, we adopt a hill-climbing procedure to segment the smoothed image. For multiplicative noise removal, we employ a difference of convex algorithm to solve the non-convex AA model. We also improve the non-local total variation model. More precisely, we add an extra term to impose regularity on the graph formed by the weights between pixels. Thin structures benefit from this regularization term, because it allows the weights to adapt from a global point of view, so thin features are not overlooked as in conventional non-local models. Since the non-local total variation term now has two variables, the image u and the weights v, and is concave with respect to v, the proximal alternating linearized minimization algorithm with variable metrics is naturally applied to solve the non-convex model efficiently. The efficiency of the proposed approaches is demonstrated on problems including image segmentation, image inpainting and image denoising.
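The two-stage idea above (smooth first, then segment the smoothed result) can be sketched in miniature. In the toy version below, a 3x3 box blur stands in for the primal-dual smoothing stage and a tiny k-means stands in for the hill-climbing procedure; both substitutions are simplifications, not the thesis' algorithms.

```python
import numpy as np

def two_stage_segment(image, k=2, iters=10):
    """Toy two-stage segmentation of an HxWx3 color image.
    Stage 1: smooth away trivial detail (3x3 box blur, periodic borders).
    Stage 2: cluster the smoothed colors into k segments (k-means)."""
    # stage 1: 3x3 box smoothing per channel
    s = sum(np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    pixels = s.reshape(-1, s.shape[2])
    # stage 2: k-means with deterministic init (k evenly spaced pixels)
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dist = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels.reshape(image.shape[:2])
```

The point of the first stage is that clustering a smoothed image is far more stable than clustering raw pixels, which is also why the thesis segments the minimizer of the variational model rather than the input.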

Topics in image recovery and image quality assessment / Cui Lei.

Cui, Lei 16 November 2016 (has links)
Image recovery, especially image denoising and deblurring, has been widely studied over the last decades. Variational models can preserve image edges well while restoring images degraded by noise and blur. Some variational models are non-convex, and methods for non-convex optimization remain limited. This thesis develops a non-convex optimization approach based on the difference of convex algorithm (DCA) for solving different variational models for various kinds of noise removal problems. Depending on the imaging environment and technique, the noise appearing in images can follow different distributions. Here we show how to apply the DCA to Rician noise removal and Cauchy noise removal. Our experiments demonstrate that the proposed non-convex algorithms outperform existing ones, with better PSNR and less computation time. The progress made by our new method can improve the precision of diagnostic techniques by reducing Rician noise more efficiently, and can improve synthetic aperture radar imaging precision by reducing the Cauchy noise within. When applying variational models to image denoising and deblurring, a significant issue is the choice of regularization parameters. Few methods have been proposed for regularization parameter selection, and the numerical algorithms of existing methods are either complicated or implicit. To find a more efficient and easier way to estimate regularization parameters, we create a new image quality sharpness metric, called SQ-Index, based on the theory of Global Phase Coherence. The new metric can be used not only to estimate parameters for a variety of variational models, but also to estimate noise intensity under specific models. In our experiments, we show the noise estimation performance of this new metric. 
Moreover, extensive experiments address image denoising and deblurring under different kinds of noise and blur. The numerical results show robust image restoration when our metric is applied to parameter selection for different variational models.
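Several of the abstracts above compare methods by PSNR. For reference, the standard definition (here assuming intensities in [0, 1]; the `peak` argument is the only assumption) is:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    a restored estimate: 10 * log10(peak^2 / MSE). Higher is better;
    identical images give infinity."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```

For 8-bit images, `peak` would be 255 instead; a 1 dB gain corresponds to roughly a 20% reduction in mean squared error.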
