1

Projected Barzilai-Borwein Method with Infeasible Iterates for Nonnegative Image Deconvolution

Fraser, Kathleen 22 July 2011
The Barzilai-Borwein (BB) method for unconstrained optimization has attracted attention for its "chaotic" behaviour and fast convergence on image deconvolution problems. However, images with large areas of darkness, such as those often found in astronomy or microscopy, have been shown to benefit from approaches which impose a nonnegativity constraint on the pixel values. We present a new adaptation of the BB method which enforces a nonnegativity constraint by projecting the solution onto the feasible set, but allows for infeasible iterates between projections. We show that this approach results in faster convergence than the basic Projected Barzilai-Borwein (PBB) method, while achieving better quality images than the unconstrained BB method. We find that the new method also performs comparably to the Gradient Projection-Conjugate Gradient (GPCG) method, and in most test cases achieves a lower restoration error, despite being a much simpler algorithm.
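To make the idea above concrete, here is a minimal NumPy sketch of a Barzilai-Borwein iteration for a nonnegative least-squares model of deconvolution, projecting only every few iterations so that intermediate iterates may be infeasible. The quadratic objective, BB1 step-length rule, and fixed projection schedule are simplifying assumptions for illustration, not the thesis's exact algorithm.

    import numpy as np

    def projected_bb(A, b, x0, iters=100, project_every=5):
        # BB gradient steps on 0.5*||Ax - b||^2 with x >= 0 enforced
        # only every `project_every` iterations, so intermediate
        # iterates may leave the feasible set (hypothetical schedule).
        x = x0.copy()
        g = A.T @ (A @ x - b)            # gradient of the quadratic
        alpha = 1e-3                     # conservative initial step
        for k in range(iters):
            x_new = x - alpha * g
            if (k + 1) % project_every == 0:
                x_new = np.maximum(x_new, 0.0)   # project onto x >= 0
            g_new = A.T @ (A @ x_new - b)
            s, y = x_new - x, g_new - g
            sty = s @ y
            if sty > 0:                  # BB1 step length when defined
                alpha = (s @ s) / sty
            x, g = x_new, g_new
        return np.maximum(x, 0.0)        # final projection to feasibility

Setting project_every=1 recovers the basic PBB method, while project_every greater than iters gives the unconstrained BB method; these are the two baselines the thesis compares against.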
2

Blind image deconvolution : nonstationary Bayesian approaches to restoring blurred photos

Bishop, Tom E. January 2009
High quality digital images have become pervasive in modern scientific and everyday life, in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing the blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blur from out-of-focus and other types of degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID): its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a "doubly ill-posed" problem: an extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge.

The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and their application to solving the problem with both Bayesian and non-Bayesian techniques.

The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image; the second extends this with locally adaptive noncausal autoregressive (AR) texture estimation and local mean components. These models allow for the recovery of image details including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this problem. Due to the complexity of the models used and of the problem itself, there are many challenges which must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: first, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then the stochastic methods of variational Bayesian (VB) distribution approximation and simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blur. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
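For reference, the linear spatially-invariant (LSI) convolutional model to which the review reduces these observation models can be written, in our notation, as

    y = h \ast x + n

where x is the unknown sharp image, h the blur point spread function, \ast two-dimensional convolution, and n additive noise; blind deconvolution seeks both x and h given only y.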
3

Variable Splitting as a Key to Efficient Image Reconstruction

Dolui, Sudipto January 2012
The reconstruction of digital images from their degraded measurements has always been of central importance in numerous applications of imaging sciences. In real life, acquired imaging data are typically contaminated by various types of degradation phenomena, usually related to imperfections of the image acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover a close approximation of it, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied sciences creates the need for methods of image restoration that are both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction is generally achieved at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework which allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting as a tool for simplifying complex reconstruction problems through their replacement by a sequence of simpler, and therefore easily solvable, ones. Moreover, we consider two different types of variable splitting and demonstrate their connection to a number of existing approaches currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to the solution of complex reconstruction problems with composite, cross-domain constraints. As specific applications of practical importance, we consider the problem of reconstruction of diffusion MRI signals from sub-critically sampled, incomplete data as well as the problem of blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to the problem of image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of a neighbourhood operation as well as to derive a unifying approach to denoising of imaging data under a variety of different noise scenarios.
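As a generic illustration of variable splitting (the thesis's BTS and FCS schemes are specific instances not reproduced here), a composite problem over a single variable can be rewritten with an auxiliary variable and solved by alternating between simpler subproblems, for example in ADMM form:

    \min_{x,z} f(x) + g(z) \quad \text{subject to} \quad x = z
    x^{k+1} = \arg\min_x f(x) + (\rho/2)\|x - z^k + u^k\|_2^2
    z^{k+1} = \arg\min_z g(z) + (\rho/2)\|x^{k+1} - z + u^k\|_2^2
    u^{k+1} = u^k + x^{k+1} - z^{k+1}

Each subproblem involves only f or only g, so a difficult joint problem is replaced by a sequence of simpler, easily solvable ones, exactly the trade described above.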
4

A Practical Solution for Eliminating Artificial Image Contrast in Aberration-Corrected TEM

Tanaka, Nobuo, Kondo, Yushi, Kawai, Tomoyuki, Yamasaki, Jun 02 1900
No description available.
5

Efficient methodologies for single-image blind deconvolution and deblurring

Khan, Aftab January 2014
The Blind Image Deconvolution/Deblurring (BID) problem was recognised in the early 1960s, but it remains a challenging task for the image processing research community to find an efficient, reliable and, most importantly, diversely applicable deblurring scheme. The main challenge arises from little or no prior information about the image or the blurring process, as well as the lack of optimal restoration filters to reduce or completely eliminate the blurring effect. Moreover, restoration can be marred by the two common side effects of deblurring, namely noise amplification and the ringing artefacts that arise in the deblurred image due to an unrealizable or imperfect restoration filter. Also, a scheme that can process different types of blur, especially for real images, has yet to be realized to a satisfactory level.

This research is focused on the development of blind restoration schemes for real-life blurred images. The primary objective is to design a BID scheme that is robust in terms of Point Spread Function (PSF) estimation, efficient in terms of restoration speed, and effective in terms of restoration quality. The desired scheme requires a deblurring measure to act as feedback on the quality of the deblurred image and to guide the estimation of the blurring PSF. The blurred image and the estimated PSF can then be passed on to any classical restoration filter for deblurring. The deblurring measures presented in this research include blind non-Gaussianity measures as well as blind Image Quality Measures (IQMs). These measures are blind in the sense that they are able to gauge the quality of an image directly from it, without the need to reference a high quality image. The non-Gaussianity measures include spatial and spectral kurtosis measures, while the image quality analysers include the Blind/Reference-less Image Spatial QUality Evaluator (BRISQUE), the Natural Image Quality Evaluator (NIQE) index and the Reblurring based Peak Signal to Noise Ratio (RPSNR) measure. BRISQUE, NIQE and spectral kurtosis are introduced for the first time as deblurring measures for BID. RPSNR is a novel full-reference yet blind IQM designed and used in this research work.

Experiments were conducted on different image datasets and real-life blurred images. Optimization of the BID schemes was achieved using a gradient descent based scheme and a Genetic Algorithm (GA). Quantitative results based on full-reference and non-reference IQMs present BRISQUE as a robust and computationally efficient blind feedback quality measure. Both parametric and arbitrarily shaped (non-parametric or generic) PSFs were treated for the blind deconvolution of images. The parametric forms of PSF include uniform Gaussian, motion and out-of-focus blur, while the arbitrarily shaped PSFs comprise blurs with much more complex shapes that cannot easily be modelled in parametric form. A novel scheme for arbitrarily shaped PSF estimation and blind deblurring was designed, implemented and tested on artificial and real-life blurred images; it provides a unified basis for the estimation of both parametric and arbitrarily shaped PSFs using the BRISQUE quality measure in conjunction with a GA. Full-reference and non-reference IQMs were utilised to gauge the quality of the deblurred images, although in the real BID case only non-reference IQMs can be employed, due to the unavailability of a reference high quality image. Quantitative results on these images demonstrate the restoration ability of the BID scheme. The significance of the research lies in the scheme's ability to handle parametric and arbitrarily shaped PSFs with a single algorithm, for single-shot blurred images, with enhanced optimization through the gradient descent scheme and GA in conjunction with multiple feedback IQMs.
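A schematic of the feedback loop described above, using a Wiener filter as the classical restoration step: a candidate parametric PSF is applied, the result is scored by a blind quality measure, and the score guides the search over PSF parameters. The mean-gradient score and the grid search below are stand-ins for illustration; the thesis uses BRISQUE/NIQE with gradient descent or a GA.

    import numpy as np

    def gaussian_psf(shape, sigma):
        # Parametric Gaussian blur kernel, centred then shifted to origin.
        y, x = np.indices(shape) - np.array(shape)[:, None, None] // 2
        psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return np.fft.ifftshift(psf / psf.sum())

    def wiener_deblur(blurred, psf, nsr=1e-2):
        # Classical Wiener restoration filter in the Fourier domain.
        H = np.fft.fft2(psf)
        G = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

    def blind_score(img):
        # Stand-in blind quality measure (mean gradient magnitude);
        # the thesis uses BRISQUE or NIQE here instead.
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy).mean()

    def estimate_sigma(blurred, candidates=np.linspace(0.5, 5.0, 10)):
        # Search over PSF widths, guided only by the blind score
        # (the thesis optimises with gradient descent or a GA).
        return max(candidates,
                   key=lambda s: blind_score(
                       wiener_deblur(blurred, gaussian_psf(blurred.shape, s))))

The estimated PSF and the blurred image can then be handed to any classical restoration filter, as the abstract notes.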
6

Subspace Techniques for Parallel Magnetic Resonance Imaging

Gol Gungor, Derya 30 December 2014
No description available.
7

VARIATIONAL METHODS FOR IMAGE DEBLURRING AND DISCRETIZED PICARD'S METHOD

Money, James H. 01 January 2006
In this digital age, it is more important than ever to have good methods for processing images. We focus on the removal of blur from a captured image, which is called the image deblurring problem. In particular, we make no assumptions about the blur itself, which is called blind deconvolution. We approach the problem by minimizing an energy functional that utilizes the total variation norm and a fidelity constraint. In particular, we extend the work of Chan and Wong to use a reference image in the computation. Using the shock filter as a reference image, we produce a superior result compared to existing methods. We are able to produce good results on non-black background images and images where the blurring function is not centro-symmetric. We consider using a general Lp norm for the fidelity term and compare different values for p. Using an analysis similar to Strong and Chan, we derive an adaptive scale method for the recovery of the blurring function.

We also consider two numerical methods in this dissertation. The first method is an extension of Picard's method for PDEs in the discrete case. We compare the results to the analytical Picard method, showing that the only difference is the use of approximate versus exact derivatives. We relate the method to existing finite difference schemes, including the Lax-Wendroff method. We derive the stability constraints for several linear problems and show that the stability region is enlarged. We conclude by showing several examples of the method and how substantial the computational savings are. The second method we consider is a black-box implementation of a method for solving the generalized eigenvalue problem. By utilizing the work of Golub and Ye, we implement a routine that is robust in comparison with existing methods. We compare this routine against JDQZ and LOBPCG and show that it performs well in numerical testing.
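The Chan-Wong-style energy functional extended here combines, in our notation, a fidelity term with total variation penalties on both the image and the blur:

    \min_{u,k} \frac{1}{2}\|k \ast u - z\|_2^2 + \alpha_1 \int |\nabla u| + \alpha_2 \int |\nabla k|

where z is the observed image, u the restored image, k the blurring function, and \alpha_1, \alpha_2 weight the regularization terms; the thesis additionally studies replacing the squared L2 fidelity term with a general Lp norm and guiding the minimization with a shock-filtered reference image.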
8

Segmentation and Deconvolution of Fluorescence Microscopy Volumes

Soonam Lee 14 August 2019
Recent advances in optical microscopy have enabled biologists to collect fluorescence microscopy volumes of cellular and subcellular structures of living tissue. This results in large datasets of microscopy volumes and creates the need for automated, image-processing-aided quantification methods. To quantify biological structures, a first and fundamental step is segmentation. Yet the quantitative analysis of microscopy volumes is hampered by light diffraction, distortion created by lens aberrations in different directions, and the complex variation of biological structures. This thesis describes several proposed segmentation methods to identify various biological structures, such as nuclei or tubules, observed in fluorescence microscopy volumes. For nuclei segmentation, a multiscale edge detection method and a 3D active contours with inhomogeneity correction method are used. Our proposed 3D active contours with inhomogeneity correction method utilizes 3D microscopy volume information while addressing intensity inhomogeneity across vertical and horizontal directions. For tubule segmentation, an ellipse model fitting to tubule boundary method and a convolutional neural networks with inhomogeneity correction method are presented. More specifically, the ellipse fitting method utilizes a combination of adaptive and global thresholding, potentials, z-direction refinement, branch pruning, end point matching, and boundary fitting steps to delineate tubular objects. The deep learning based method combines intensity inhomogeneity correction and data augmentation, followed by a convolutional neural network architecture. Moreover, this thesis demonstrates a new deconvolution method to improve microscopy image quality without knowing the 3D point spread function, using spatially constrained cycle-consistent adversarial networks. The results of the proposed methods are visually and numerically compared with other methods. Experimental results demonstrate that our proposed methods achieve better performance than other methods for nuclei/tubule segmentation as well as deconvolution.
9

Spatial resolution improvement of hyperspectral images by deconvolution and joint unmixing-deconvolution

Song, Yingying 13 December 2018
A hyperspectral image is a 3D data cube in which every pixel provides local spectral information about a scene of interest across a large number of contiguous bands. The observed images may suffer from degradation due to the measuring device, resulting in a convolution or blurring of the images. Hyperspectral image deconvolution (HID) consists in removing the blurring to improve the spatial resolution of the images as far as possible. A Tikhonov-like HID criterion with a non-negativity constraint, proposed in the thesis of Simon Henrot, is considered here. This method uses separable spatial and spectral regularization terms whose strengths are controlled by two regularization parameters. The first part of this thesis proposes the maximum curvature criterion (MCC) and the minimum distance criterion (MDC) to estimate these regularization parameters automatically, by formulating the deconvolution problem as a multi-objective optimization problem. The second part of this thesis proposes the sliding block regularized LMS (SBR-LMS) algorithm for the online deconvolution of hyperspectral images as provided by whiskbroom and pushbroom scanning systems. The proposed algorithm accounts for the non-causality of the convolution kernel and includes non-quadratic regularization terms while maintaining a linear complexity compatible with real-time processing in industrial applications. The third part of this thesis proposes joint unmixing-deconvolution methods based on the Tikhonov criterion in both offline and online contexts. A non-negativity constraint is added to improve their performance.
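In symbols (our notation, paraphrasing the description above), the Tikhonov-like HID criterion with its two regularization parameters can be sketched as

    \min_{x \ge 0} \|y - Hx\|_2^2 + \mu_1 \|D_s x\|_2^2 + \mu_2 \|D_\lambda x\|_2^2

where y is the observed hyperspectral cube, H the blurring operator, D_s and D_\lambda spatial and spectral difference operators, and \mu_1, \mu_2 the regularization parameters that the MCC and MDC criteria estimate automatically.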
