  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Optimum deconvolution of seismic transients: A model-based signal processing approach

Schutz, Kerry D. January 1994 (has links)
No description available.
42

The Blind Deconvolution of Linearly Blurred Images using non-Parametric Stabilizing Functions

Hare, James 08 1900 (has links)
An iterative solution to the problem of blind image deconvolution is presented whereby a previous image estimate is explicitly used in the new image estimation process. The previous image is pre-filtered using an adaptive, non-parametric stabilizing function that is updated based on a current error estimate. This function is experimentally shown to dramatically benefit the convergence rate for the a priori restoration case. Noise propagation from one iteration to the next is reduced by the use of a second, regularizing operator, resulting in a hybrid iteration technique. Further, error terms are developed that shed new light on the error propagation properties of this method and others by quantifying the extent of noise and regularization error propagation. Optimal non-parametric, frequency adaptive stabilizing and regularization functions are then derived based on this error analysis. / Thesis / Master of Engineering (ME)
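The a priori (known-blur) iterative restoration idea described above can be sketched as follows. This is a minimal illustration with a fixed Tikhonov-style stabilizing filter standing in for the thesis's adaptive, non-parametric one; all function names and parameter values are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def iterative_deconvolve(y, h, n_iter=50, beta=1.0, alpha=1e-2):
    """Iterative frequency-domain deconvolution with a stabilizing filter.

    y: blurred 1-D signal; h: known blur kernel (the a priori case).
    The residual is filtered by a fixed Tikhonov-style stabilizing
    function S before updating, which limits noise amplification; the
    thesis adapts S at each iteration from a current error estimate.
    """
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y)
    S = np.conj(H) / (np.abs(H) ** 2 + alpha)  # stabilizing / regularizing filter
    X = np.zeros(n, dtype=complex)
    for _ in range(n_iter):
        # successive-approximation update: x <- x + beta * S * (y - h * x)
        X = X + beta * S * (Y - H * X)
    return np.fft.ifft(X).real
```

Because the per-frequency contraction factor is alpha / (|H|^2 + alpha), convergence is fast where the blur passes signal and slow where it does not, so early stopping itself acts as regularization.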
43

Deconvolution of seismic data using extremal skew and kurtosis

Vafidis, Antonios. January 1984 (has links)
No description available.
44

Nonparametric And Empirical Bayes Estimation Methods

Benhaddou, Rida 01 January 2013 (has links)
In the present dissertation, we investigate two different nonparametric models: the empirical Bayes model and the functional deconvolution model. In the case of nonparametric empirical Bayes estimation, we carry out a complete minimax study. In particular, we derive minimax lower bounds for the risk of the nonparametric empirical Bayes estimator for a general conditional distribution. This result has never been obtained previously. In order to attain optimal convergence rates, we use a wavelet series based empirical Bayes estimator constructed in Pensky and Alotaibi (2005). We propose an adaptive version of this estimator using Lepski's method and show that the estimator attains optimal convergence rates. The theory is supplemented by numerous examples. Our study of the functional deconvolution model expands results of Pensky and Sapatinas (2009, 2010, 2011) to the case of estimating an (r + 1)-dimensional function and to the case of dependent errors. In both cases, we derive minimax lower bounds for the integrated square risk over a wide set of Besov balls and construct adaptive wavelet estimators that attain those optimal convergence rates. In particular, in the case of estimating a periodic (r + 1)-dimensional function, we show that by choosing Besov balls of mixed smoothness we can avoid the "curse of dimensionality" and, hence, obtain higher than usual convergence rates when r is large. The study of deconvolution of a multivariate function is motivated by seismic inversion, which can be reduced to the solution of noisy two-dimensional convolution equations that allow one to draw inference on underground layer structures along the chosen profiles. The common practice in seismology is to recover layer structures separately for each profile and then to combine the derived estimates into a two-dimensional function. By studying the two-dimensional version of the model, we demonstrate that this strategy usually leads to estimators which are less accurate than the ones obtained as two-dimensional functional deconvolutions. Finally, we consider a multichannel deconvolution model with long-range dependent Gaussian errors. We do not limit our consideration to a specific type of long-range dependence; rather, we assume that the eigenvalues of the covariance matrix of the errors are bounded above and below. We show that convergence rates of the estimators depend on a balance between the smoothness parameters of the response function, the smoothness of the blurring function, the long memory parameters of the errors, and how the total number of observations is distributed among the channels.
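The functional deconvolution setting studied in this line of work can be written, roughly, in the following generic form (the symbols are illustrative, not the dissertation's exact notation):

```latex
\[
  y(u_l, t_i) \;=\; \int_0^1 g\!\left(u_l,\, t_i - s\right) f(s)\, ds \;+\; \sigma\, \varepsilon_{l,i},
\]
```

where $f$ is the unknown function observed through convolution with a known blurring kernel $g$ on a grid of channels $u_l$ and design points $t_i$, and $\varepsilon_{l,i}$ are Gaussian errors, possibly long-range dependent as in the multichannel model above.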
45

PSF Sampling in Fluorescence Image Deconvolution

Inman, Eric A 01 March 2023 (has links) (PDF)
All microscope imaging is largely affected by inherent resolution limitations because of out-of-focus light and diffraction effects. The traditional approach to restoring the image resolution is to use a deconvolution algorithm to “invert” the effect of convolving the volume with the point spread function. However, these algorithms fall short in several areas, such as noise amplification and the choice of stopping criterion. In this paper, we try to reconstruct an explicit volumetric representation of the fluorescence density in the sample and fit a neural network to the target z-stack to properly minimize a reconstruction cost function for an optimal result. Additionally, we do a weighted sampling of the point spread function to avoid unnecessary computations and prioritize non-zero signals. In a baseline comparison against the Richardson-Lucy method, our algorithm outperforms RL for images affected by high levels of noise.
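The Richardson-Lucy baseline mentioned above is the classic multiplicative expectation-maximization scheme. A textbook 1-D version (not the authors' code) looks like:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=100, eps=1e-12):
    """Classic Richardson-Lucy deconvolution (1-D, circular convolution).

    y: observed non-negative signal; psf: non-negative kernel summing to 1.
    Each iteration blurs the current estimate, compares it to the data,
    and applies the correlated ratio as a multiplicative correction,
    which preserves non-negativity.
    """
    n = len(y)
    H = np.fft.fft(psf, n)
    x = np.full(n, y.mean())  # flat positive initialization
    for _ in range(n_iter):
        blurred = np.real(np.fft.ifft(H * np.fft.fft(x)))
        ratio = y / (blurred + eps)
        # correlate with the flipped PSF: multiply by conj(H) in Fourier space
        x = x * np.real(np.fft.ifft(np.conj(H) * np.fft.fft(ratio)))
    return x
```

In the noiseless case the iterates approach the true signal; with noisy data the iteration must be stopped early, which is exactly the stopping-criterion weakness the abstract points to.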
46

Modeling the Spatially Varying Point Spread Function of the Kirkpatrick-Baez Optic

Adelman, Nathan 01 June 2018 (has links) (PDF)
Lawrence Livermore National Laboratory's (LLNL) National Ignition Facility (NIF) uses a variety of diagnostics and image capturing optics for collecting data in High Energy Density Physics (HEDP) experiments. However, every image capturing system causes blurring and degradation of the images captured. This degradation can be mathematically described through a camera system's Point Spread Function (PSF), and can be reversed if the system's PSF is known. This is deconvolution, also called image restoration. Many PSFs can be determined experimentally by imaging a point source, a light-emitting object that appears infinitesimally small to the camera. However, NIF's Kirkpatrick-Baez Optic (KBO) is more difficult to characterize because it has a spatially varying PSF. Spatially varying PSFs make deconvolution much more difficult because instead of being 2-dimensional, a spatially varying PSF is 4-dimensional. This work discusses a method for modeling the KBO's PSF as the sum of products of two basis functions. This model assumes separability of the four dimensions of the PSF into two 2-dimensional basis functions. While previous work assumed parametric forms for some of the basis functions, this work attempts to use only numeric representations of the basis functions. Previous work also ignored the possibility of non-linear magnification along each image axis, whereas this work successfully characterizes the KBO's non-linear magnification. Implementation of this model gives exceptional results, with the correlation coefficient between a model-generated image and an experimental image as high as 0.9994. Modeling the PSF with high accuracy lays the groundwork for deconvolution of images generated by the KBO.
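The sum-of-products model has a practical payoff: each term reduces to an ordinary convolution of a weighted copy of the object, so the full 4-dimensional PSF never has to be stored densely. A 1-D sketch under that assumption (names and shapes are illustrative, not the thesis's code):

```python
import numpy as np

def sv_blur(f, weights, kernels):
    """Spatially varying blur as a sum of separable products,
    PSF(s; x) ~= sum_k u_k(x) v_k(s)  (1-D for brevity).

    f:        object signal
    weights:  list of per-position weight arrays u_k (same shape as f)
    kernels:  list of spatially invariant kernels v_k
    Each term is a plain circular convolution of the weighted object,
    so the cost is K FFT convolutions instead of a dense 2-D operator.
    """
    n = len(f)
    g = np.zeros(n)
    for u, v in zip(weights, kernels):
        g += np.real(np.fft.ifft(np.fft.fft(v, n) * np.fft.fft(u * f)))
    return g
```

With weights that sum to one everywhere and identical kernels, the model collapses to an ordinary spatially invariant blur, which makes a convenient sanity check.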
47

Spatial resolution improvement of hyperspectral images by deconvolution and joint unmixing-deconvolution

Song, Yingying 13 December 2018 (has links)
A hyperspectral image is a 3D data cube in which every pixel provides local spectral information about a scene of interest across a large number of contiguous bands. The observed images may suffer from degradation due to the measuring device, resulting in a blurring of the images that can be modeled as a convolution. Hyperspectral image deconvolution (HID) consists in removing the blur to improve the spatial resolution of the images as far as possible. A Tikhonov-like HID criterion with a non-negativity constraint, proposed in the thesis of Simon Henrot, is considered here. This method uses separable spatial and spectral regularization terms whose strengths are controlled by two regularization parameters. The first part of this thesis proposes the maximum curvature criterion (MCC) and the minimum distance criterion (MDC) to estimate these regularization parameters automatically by formulating the deconvolution problem as a multi-objective optimization problem. The second part proposes the sliding-block regularized LMS (SBR-LMS) algorithm for the online deconvolution of hyperspectral images as provided by whiskbroom and pushbroom scanning systems. The proposed algorithm accounts for the non-causality of the convolution kernel and includes non-quadratic regularization terms while maintaining a linear complexity compatible with real-time processing in industrial applications. The third part proposes joint unmixing-deconvolution methods based on the Tikhonov criterion in both offline and online contexts. Adding a non-negativity constraint improves their performance.
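The structure of a Tikhonov deconvolution criterion can be illustrated with a single-band, single-parameter version that has a closed-form solution in the Fourier domain. This is only a sketch of the criterion's shape; the thesis's criterion has separate spatial and spectral terms with two parameters plus a non-negativity constraint, which rules out a closed form.

```python
import numpy as np

def tikhonov_deconvolve(y, h, mu=1e-2):
    """Closed-form Tikhonov-regularized deconvolution, one band, 1-D.

    Minimizes ||y - h*x||^2 + mu * ||d*x||^2, where d is a discrete
    Laplacian enforcing spatial smoothness. mu plays the role of one of
    the two regularization parameters that MCC/MDC would select.
    """
    n = len(y)
    H = np.fft.fft(h, n)
    D = np.fft.fft(np.array([1.0, -2.0, 1.0]), n)  # Laplacian penalty filter
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + mu * np.abs(D) ** 2)
    return np.real(np.fft.ifft(X))
```

Sweeping mu and plotting residual norm against regularizer norm traces the response curve whose maximum-curvature point the MCC criterion targets.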
48

Development of optimized deconvoluted coincidence Doppler broadening spectroscopy and deep level transient spectroscopies with applications to various semiconductor materials

Zhang, Jingdong, 張敬東. January 2006 (has links)
published_or_final_version / abstract / Physics / Doctoral / Doctor of Philosophy
49

Blind image deconvolution : nonstationary Bayesian approaches to restoring blurred photos

Bishop, Tom E. January 2009 (has links)
High quality digital images have become pervasive in modern scientific and everyday life — in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing these blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID); its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a “doubly ill-posed” problem — the extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge. The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas that will be used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their applications to solving the problem with both Bayesian and non-Bayesian techniques. The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive non-causal autoregressive (AR) texture estimation and local mean components. These models allow for recovery of image details including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic. Due to the complexity of the models used and the problem itself, there are many challenges which must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: first, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then the stochastic methods of variational Bayesian (VB) distribution approximation and simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blurs. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state of the art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
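The Gibbs sampling strategy the thesis finds most effective can be illustrated on a toy conjugate model. This shows only the mechanics of the sampler (draw each unknown from its full conditional in turn); the thesis applies it to far richer nonstationary image and blur models, and the priors below are an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_normal(y, n_samples=2000, burn=500):
    """Minimal Gibbs sampler for the mean and variance of normal data.

    Alternates between the two full conditionals under a flat prior on
    mu and a Jeffreys-like prior on sigma^2 (illustrative choices).
    Returns post-burn-in draws of (mu, sigma^2).
    """
    n, ybar = len(y), np.mean(y)
    sig2 = np.var(y)
    mus, sig2s = [], []
    for t in range(n_samples):
        # mu | sig2, y  ~  N(ybar, sig2 / n)
        mu = rng.normal(ybar, np.sqrt(sig2 / n))
        # sig2 | mu, y  ~  Inv-Gamma(n/2, sum((y - mu)^2) / 2)
        sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / np.sum((y - mu) ** 2))
        if t >= burn:
            mus.append(mu)
            sig2s.append(sig2)
    return np.array(mus), np.array(sig2s)
```

In the blind deconvolution setting the same alternation runs over image, blur, and hyperparameters, which is what makes the sampler robust to different unknown blur types.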
50

Power Analysis in Applied Linear Regression for Cell Type-Specific Differential Expression Detection

Glass, Edmund 01 January 2016 (has links)
The goal of many human disease-oriented studies is to detect molecular mechanisms that differ between healthy controls and patients. Yet commonly used gene expression measurements from any tissue suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. However, this variability may actually be advantageous, as heterogeneous gene expression measurements coupled with cell counts may provide deeper insights into gene expression differences at the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression. Yet they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect the performance of linear regression. In this dissertation we specifically address the parameter space involved in the most rigorous use of linear regression to estimate cell type-specific differential expression and report under which conditions significant detection is probable. We define the parameters affecting the sensitivity of cell type-specific differential expression estimation as follows: sample size, cell type-specific proportion variability, mean squared error (spread of observations around the linear regression line), conditioning of the cell proportions predictor matrix, and the size of the actual cell type-specific differential expression. Each parameter, with the exception of cell type-specific differential expression (effect size), affects the variability of cell type-specific differential expression estimates. We have developed a power-analysis approach to cell type by cell type and genomic site by site differential expression detection which relies upon Welch's two-sample t-test, factors in differences in cell type-specific expression estimate variability, and reduces false discovery. To this end we have published an R package, LRCDE, available on GitHub (http://www.github.com/ERGlass/lrcde.dev), which outputs observed statistics of cell type-specific differential expression, including the two-sample t-statistic, t-statistic p-value, and power calculated from the two-sample t-statistic on a genomic site-by-site basis.
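The power-analysis idea can be sketched with a Monte-Carlo estimate of the power of Welch's two-sample t-test. This is a generic, hypothetical illustration: LRCDE itself works from fitted regression estimates and their variances rather than raw per-group draws, and the names and thresholds below are assumptions.

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1) / na, b.var(ddof=1) / nb
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df

def simulated_power(effect, sd_a, sd_b, n, n_sim=2000, t_crit=2.0, seed=1):
    """Monte-Carlo power of Welch's test for a given effect size.

    effect: true group difference; sd_a, sd_b: per-group spreads
    (the estimate-variability terms the dissertation enumerates);
    t_crit=2.0 approximates the two-sided 5% critical value.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, sd_a, n)
        b = rng.normal(effect, sd_b, n)
        t, _ = welch_t(a, b)
        if abs(t) > t_crit:
            hits += 1
    return hits / n_sim
```

Sweeping `effect`, `sd_a`, `sd_b`, and `n` maps out the same sensitivity landscape the dissertation studies: power rises with effect size and sample size and falls as estimate variability grows.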
