1

Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos

Bishop, Tom E. January 2009 (has links)
High quality digital images have become pervasive in modern scientific and everyday life — in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing these blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID); its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a "doubly ill-posed" problem — the extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge.

The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas that will be used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their applications to solving the problem with both Bayesian and non-Bayesian techniques.

The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive noncausal autoregressive (AR) texture estimation and local mean components. These models allow for recovery of image details including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic. Due to the complexity of the models used and the problem itself, there are many challenges which must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: first, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then variational Bayesian (VB) distribution approximation; and finally stochastic simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blur. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
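For reference, the linear spatially-invariant observation model that the first part of this thesis reduces to is y = k * x + n: the sharp image x convolved with a blur kernel k, plus noise n. A minimal simulation sketch in Python follows; the image, the disc-shaped PSF and the noise level are illustrative assumptions, not values from the thesis.

    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(0)
    x = rng.random((256, 256))              # stand-in for the latent sharp image

    # Out-of-focus blur modelled as a normalised disc-shaped PSF of radius r.
    r = 4
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    k = (xx**2 + yy**2 <= r**2).astype(float)
    k /= k.sum()

    # LSI degradation: convolution with the PSF plus white Gaussian noise.
    y = fftconvolve(x, k, mode='same') + 0.01 * rng.standard_normal(x.shape)

Blind deconvolution is then the problem of recovering both x and k from y alone, which is what makes the problem doubly ill-posed.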
2

Efficient methodologies for single-image blind deconvolution and deblurring

Khan, Aftab January 2014 (has links)
The Blind Image Deconvolution/Deblurring (BID) problem was recognised in the early 1960s, but it remains a challenge for the image processing research community to find an efficient, reliable and, most importantly, diversely applicable deblurring scheme. The main challenge arises from little or no prior information about the image or the blurring process, as well as the lack of optimal restoration filters to reduce or completely eliminate the blurring effect. Moreover, restoration can be marred by the two common side effects of deblurring, namely noise amplification and ringing artefacts, which arise in the deblurred image due to an unrealisable or imperfect restoration filter. Also, a scheme that can process different types of blur, especially in real images, is yet to be realised to a satisfactory level.

This research is focused on the development of blind restoration schemes for real-life blurred images. The primary objective is to design a BID scheme that is robust in terms of Point Spread Function (PSF) estimation, efficient in terms of restoration speed, and effective in terms of restoration quality. The desired scheme requires a deblurring measure to act as feedback on the quality of the deblurred image and to guide the estimation of the blurring PSF. The blurred image and the estimated PSF can then be passed to any classical restoration filter for deblurring. The deblurring measures presented in this research include blind non-Gaussianity measures as well as blind Image Quality Measures (IQMs). These measures are blind in the sense that they are able to gauge the quality of an image directly, without the need for a high-quality reference image. The non-Gaussianity measures include spatial and spectral kurtosis measures, while the image quality analysers include the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), the Natural Image Quality Evaluator (NIQE) index, and a Reblurring-based Peak Signal-to-Noise Ratio (RPSNR) measure. BRISQUE, NIQE and spectral kurtosis are introduced for the first time as deblurring measures for BID. RPSNR is a novel full-reference yet blind IQM designed and used in this research.

Experiments were conducted on different image datasets and real-life blurred images. Optimisation of the BID schemes was achieved using a gradient-descent-based scheme and a Genetic Algorithm (GA). Quantitative results based on full-reference and no-reference IQMs present BRISQUE as a robust and computationally efficient blind feedback quality measure. Both parametric and arbitrarily shaped (non-parametric or generic) PSFs were treated for the blind deconvolution of images. The parametric forms of PSF include uniform Gaussian, motion and out-of-focus blur. The arbitrarily shaped PSFs comprise blurs with a much more complex shape that cannot easily be modelled in parametric form. A novel scheme for arbitrarily shaped PSF estimation and blind deblurring has been designed, implemented and tested on artificial and real-life blurred images. The scheme provides a unified basis for the estimation of both parametric and arbitrarily shaped PSFs, using the BRISQUE quality measure in conjunction with a GA. Full-reference and no-reference IQMs have been utilised to gauge the quality of deblurred images for the BID schemes; in the real BID case, only no-reference IQMs can be employed, due to the unavailability of a reference high-quality image. Quantitative results on these images demonstrate the restoration ability of the BID scheme.

The significance of the research lies in the BID scheme's ability to handle parametric and arbitrarily shaped PSFs using a single algorithm, for single-shot blurred images, with enhanced optimisation through the gradient descent scheme and GA in conjunction with multiple feedback IQMs.
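A minimal sketch of the feedback loop described above, with simple stand-ins: gradient kurtosis replaces BRISQUE/NIQE as the blind quality measure, a grid search replaces the gradient descent and GA optimisers, and a frequency-domain Wiener filter plays the role of the classical restoration filter. All function names and parameter values here are illustrative assumptions, not the thesis's implementation.

    import numpy as np
    from scipy.stats import kurtosis

    def gaussian_psf(sigma, size=15):
        """Parametric (Gaussian) PSF candidate."""
        ax = np.arange(size) - size // 2
        g = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
        return g / g.sum()

    def wiener_deblur(y, psf, nsr=1e-2):
        """Classical frequency-domain Wiener restoration filter."""
        H = np.fft.fft2(psf, s=y.shape)
        G = np.conj(H) / (np.abs(H)**2 + nsr)
        return np.real(np.fft.ifft2(G * np.fft.fft2(y)))

    def blind_quality(img):
        """Blind feedback measure: kurtosis of the gradient magnitudes
        (sharper natural images have heavier-tailed gradients)."""
        gx, gy = np.gradient(img)
        return kurtosis(np.hypot(gx, gy).ravel())

    def estimate_sigma(y, candidates=np.linspace(0.5, 5.0, 10)):
        """Pick the PSF parameter whose restoration the blind measure prefers."""
        scores = [blind_quality(wiener_deblur(y, gaussian_psf(s)))
                  for s in candidates]
        return candidates[int(np.argmax(scores))]

The estimated PSF can then be handed, together with the blurred image, to any classical restoration filter, exactly as the scheme above prescribes.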
3

Variational methods for image deblurring and discretized Picard's method

Money, James H. 01 January 2006 (has links)
In this digital age, it is more important than ever to have good methods for processing images. We focus on the removal of blur from a captured image, known as the image deblurring problem. In particular, we make no assumptions about the blur itself, which makes it a blind deconvolution problem. We approach the problem by minimizing an energy functional that combines a total variation norm with a fidelity constraint. In particular, we extend the work of Chan and Wong to use a reference image in the computation. Using the shock filter as a reference image, we produce results superior to existing methods. We are able to produce good results on images with non-black backgrounds and on images where the blurring function is not centro-symmetric. We also consider using a general Lp norm for the fidelity term and compare different values of p. Using an analysis similar to that of Strong and Chan, we derive an adaptive-scale method for the recovery of the blurring function.

We also consider two numerical methods in this dissertation. The first is an extension of Picard's method for PDEs to the discrete case. We compare the results to the analytical Picard method, showing that the only difference is the use of approximate rather than exact derivatives. We relate the method to existing finite difference schemes, including the Lax-Wendroff method. We derive stability constraints for several linear problems and show that the stability region increases. We conclude with several examples of the method, showing that the computational savings are substantial. The second method is a black-box implementation of a method for solving the generalized eigenvalue problem. Building on the work of Golub and Ye, we implement a routine that is robust compared with existing methods. We compare this routine against JDQZ and LOBPCG and show that it performs well in numerical testing.
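For context, the blind total-variation energy in the Chan and Wong formulation that this work extends can be written as follows; the regularisation weights alpha_1 and alpha_2 are illustrative, and the dissertation's variant additionally brings in a reference image and a general Lp fidelity term.

    \min_{u,\,k} \; \tfrac{1}{2}\,\| k * u - f \|_2^2
        \;+\; \alpha_1 \int_\Omega |\nabla u| \, dx
        \;+\; \alpha_2 \int_\Omega |\nabla k| \, dx

Here f is the observed blurred image, u the latent sharp image and k the blurring function (PSF); the first term is the fidelity constraint and the two integrals are the total variation norms of the image and of the blur.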
4

On structure-preserving methods for a class of structured matrices

Ben Kahla, Haithem 14 December 2017 (has links)
The classical linear algebra methods for computing the eigenvalues and eigenvectors of a matrix, low-rank approximations of a solution, etc., do not take the structure of matrices into account. Such structures are usually destroyed in the numerical process. Alternative structure-preserving methods are the subject of considerable interest in the community. This thesis is a contribution to this field.

The SR decomposition is usually implemented via the symplectic Gram-Schmidt algorithm. As in the classical case, a loss of orthogonality can occur. To remedy this, we propose two algorithms, RSGSi and RMSGSi, in which the reorthogonalization of a current set of vectors against the previously computed set is performed twice. The loss of J-orthogonality is significantly improved. A direct rounding-error analysis of the symplectic Gram-Schmidt algorithm is very hard to accomplish. We manage to get around this difficulty and give error bounds on the loss of J-orthogonality and on the factorization.

Another way to implement the SR decomposition is based on symplectic Householder transformations. An optimal choice of free parameters leads to the optimal version of the algorithm, SROSH. However, the latter may be subject to numerical instability. We propose a new modified version, SRMSH, which has the advantage of being numerically more stable. A detailed study leads to two new, more stable variants: SRMSH and SRMSH2.

In order to build an SR algorithm of complexity O(n³), where 2n is the size of the matrix, a reduction to a condensed form (upper J-Hessenberg form) via adequate similarities is crucial. This reduction may be handled via the JHESS algorithm. We show that it is possible to reduce a general matrix to upper J-Hessenberg form using only symplectic Householder transformations. The new algorithm, called JHSH, is based on an adaptation of the SRSH algorithm. We derive two new variants, JHMSH and JHMSH2, which are significantly more stable numerically. We find that these algorithms behave quite similarly to the JHESS algorithm.

The main drawback of all these algorithms (JHESS, JHMSH, JHMSH2) is that they may encounter fatal breakdowns, or may suffer from a severe form of near-breakdown, causing a brutal stop of the computations or leading to serious numerical instability. This phenomenon has no equivalent in the Euclidean case. We develop a very efficient strategy for curing fatal breakdowns and treating near-breakdowns. The new algorithms incorporating this strategy are referred to as MJHESS, MJHSH, JHM²SH and JHM²SH2. These strategies were then incorporated into the implicit version of the SR algorithm to overcome the difficulties encountered at a fatal breakdown or near-breakdown; we recall that, without them, the SR algorithm simply stops.

Finally, in another framework of structured matrices, we present a robust algorithm, via the FFT and a Hankel matrix, based on computing approximate greatest common divisors (GCDs) of polynomials, for solving the problem of blind image deconvolution. Specifically, we design a specialized algorithm for computing the GCD of bivariate polynomials. The new algorithm is based on a fast GCD algorithm for univariate polynomials of quadratic complexity O(n²) flops. The complexity of our algorithm is O(n²log(n)), where the size of the blurred images is n x n. Experimental results with synthetically blurred images illustrate the effectiveness of our approach.
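The bivariate FFT-based algorithm itself is beyond the scope of an abstract, but the principle by which polynomial GCDs enter blind deconvolution can be shown in a few lines: two observations of the same scene under different blurs share the scene as a common polynomial factor, since convolution of signals corresponds to multiplication of their polynomials. A minimal univariate sketch in Python follows; the function names, the tolerance and the two-blur setup are illustrative assumptions, standing in for the fast O(n²) univariate GCD algorithm the thesis builds on.

    import numpy as np

    def _trim(p, tol):
        """Drop leading coefficients that are negligible."""
        idx = np.flatnonzero(np.abs(p) > tol)
        return p[idx[0]:] if idx.size else np.zeros(0)

    def approx_poly_gcd(a, b, tol=1e-8):
        """Approximate GCD of two univariate polynomials (coefficients
        listed highest degree first) via the Euclidean algorithm."""
        a = _trim(np.asarray(a, float), tol)
        b = _trim(np.asarray(b, float), tol)
        while b.size:
            _, r = np.polydiv(a, b)    # a = q*b + r with deg(r) < deg(b)
            a, b = b, _trim(r, tol)
        return a / a[0]                # normalise to a monic polynomial

    x  = np.array([1.0, 2.0, 3.0])     # "image" as polynomial coefficients
    y1 = np.convolve(x, [1.0, 0.5])    # blurred by kernel h1
    y2 = np.convolve(x, [1.0, -0.25])  # blurred by kernel h2
    print(approx_poly_gcd(y1, y2))     # recovers x up to scale: [1. 2. 3.]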
