31
Solutions to linear problems in aberrated optical systems. Shain, William Jacob, 09 October 2018 (has links)
Linear problems are possibly the kindest problems in physics and mathematics. Given sufficient information, the linear equations describing such problems are intrinsically solvable. The solution can be written as a vector having undergone a linear transformation in a vector space; extracting the solution is simply a matter of inverting the transformation. In an ideal optical system, the problem of extracting the object under investigation would be well defined, and the solution trivial to implement. However, real optical systems are all aberrated in some way, and these aberrations obfuscate the information, scrambling it and rendering it inextricable. The process of disentangling the object from the aberrated system is no longer a trivial problem, or even a uniquely solvable one, and represents one of the great challenges in optics today. This thesis provides a review of the theory behind optical microscopy in the presence of missing information, an architecture for the modern physical and computational methods used to solve the linear inversion problem, and three distinct application spaces of relevance. I hope you find it useful.
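To make the inversion problem concrete, here is a small sketch (not drawn from the thesis; the blur model, sizes, and regularization weight are illustrative assumptions): naive inversion of an ill-conditioned forward operator amplifies measurement noise enormously, while a Tikhonov-regularized inverse stays stable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Forward model: a Gaussian blur matrix A acting on an unknown object x.
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

x = np.zeros(n)
x[[30, 55, 60]] = [1.0, 0.7, 0.9]          # toy "object"
y = A @ x + 1e-3 * rng.standard_normal(n)  # aberrated, noisy measurement

# Naive inversion: exact for noiseless data, but A is ill-conditioned,
# so the small noise term is hugely amplified.
x_naive = np.linalg.solve(A, y)

# Tikhonov-regularized inversion: solve (A^T A + lam I) x = A^T y.
lam = 1e-3  # regularization weight (illustrative choice)
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("naive error:", np.linalg.norm(x_naive - x))
print("regularized error:", np.linalg.norm(x_reg - x))
```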
32
Parametric deconvolution for a common heteroscedastic case. Rutikanga, Justin Ushize, January 2016 (has links)
Magister Scientiae - MSc / There exists an extensive statistics literature dealing with non-parametric deconvolution, the estimation of the underlying population probability density when sample values are subject to measurement errors. In parametric deconvolution, on the other hand, the data are known to be from a specific distribution. In this case the parameters of the distribution can be estimated by, e.g., maximum likelihood. In realistic cases the measurement errors may be heteroscedastic and there may be unknown parameters associated with the distribution. The specific realistic case is investigated in which the measurement error standard deviation is proportional to the true sample values. In this case it is shown that method-of-moments estimation is particularly simple. Estimation by maximum likelihood is computationally very expensive, since numerical integration needs to be performed for each data point, for each evaluation of the likelihood function. Method-of-moments estimation sometimes fails to give physically meaningful estimates. The origin of this problem lies in the large sampling variations of the third moment. Possible remedies are considered. Because a convolution integral needs to be calculated for each data point, and this has to be repeated at each iteration towards the solution, the computing cost of maximum likelihood is very high. New preliminary work suggests that saddle-point approximations could sometimes be used for the convolution integrals, allowing much larger datasets to be dealt with. Application of the theory is illustrated with simulated and real data.
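A minimal sketch of how method-of-moments estimation might look in this setting (the gamma distribution for the true values and all parameter values are assumptions for illustration, not necessarily the thesis's choices). Note the reliance on the third sample moment, whose large sampling variation is the source of the failures mentioned above.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(1)

# Simulated data: true values X ~ Gamma(a, b); observed W = X + U,
# with error sd proportional to the true value: U | X ~ N(0, (theta*X)^2).
a_true, b_true, theta_true = 4.0, 2.0, 0.2
x = rng.gamma(a_true, b_true, size=5000)
w = x + theta_true * x * rng.standard_normal(x.size)

# Sample moments of the observed data.
m1, m2, m3 = (np.mean(w**k) for k in (1, 2, 3))

def moment_equations(p):
    a, b, th = p
    # Model moments: E W = E X, E W^2 = (1 + th^2) E X^2,
    # E W^3 = (1 + 3 th^2) E X^3 (odd error moments vanish).
    return [a * b - m1,
            (1 + th**2) * a * (a + 1) * b**2 - m2,
            (1 + 3 * th**2) * a * (a + 1) * (a + 2) * b**3 - m3]

a_hat, b_hat, theta_hat = fsolve(moment_equations, x0=[2.0, 1.0, 0.1])
print(a_hat, b_hat, theta_hat)
```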
33
Positron Emission Tomography (PET) Tumor Segmentation and Quantification: Development of New Algorithms. Bhatt, Ruchir N, 09 November 2012 (has links)
Tumor functional volume (FV) and mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). They are used for estimating the radiation dose of a therapy, evaluating the progression of a disease, and serving as prognostic indicators for predicting outcome. PET images have low resolution and high noise, and are affected by the partial volume effect (PVE). Manually segmenting each tumor is very cumbersome and hard to reproduce. To solve these problems I developed the iterative deconvolution thresholding segmentation (IDTS) algorithm, which segments the tumor, measures the FV, corrects for the PVE, and calculates the mAC. The algorithm corrects for the PVE without the need to estimate the camera’s point spread function (PSF) and does not require optimization for a specific camera. My algorithm was tested in physical phantom studies, where hollow spheres (0.5-16 ml) were used to represent tumors with a homogeneous activity distribution. It was also tested on irregularly shaped tumors with a heterogeneous activity profile, acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal-to-background ratios (SBR) and different acquisition times (1-5 min). The algorithm was applied to ten clinical datasets, where the results were compared with manual segmentation and with fixed-percentage thresholding methods called T50 and T60, in which 50% and 60% of the maximum intensity, respectively, is used as the threshold. The average errors in FV and mAC were 30% and -35% for the 0.5 ml tumor, and ~5% for the 16 ml tumor. The overall FV error was ~10% for heterogeneous tumors in the physical and simulated phantom data. The FV and mAC errors for clinical images, compared to manual segmentation, were around -17% and 15%, respectively. In summary, my algorithm has the potential to be applied to data acquired from different cameras, as it does not depend on knowing the camera’s PSF. The algorithm can also improve dose estimation and treatment planning.
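For reference, a minimal sketch of the fixed-percentage thresholding baseline (T50/T60) described above; the voxel volume and toy data are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def fixed_threshold_segmentation(volume, fraction=0.5, voxel_volume_ml=0.008):
    """Fixed-percentage thresholding (T50 with fraction=0.5, T60 with 0.6):
    voxels above fraction * max intensity form the tumor mask."""
    mask = volume >= fraction * volume.max()
    fv_ml = mask.sum() * voxel_volume_ml   # functional volume
    mac = volume[mask].mean()              # mean activity concentration
    return mask, fv_ml, mac

# Toy PET sub-volume: a bright sphere on a noisy background.
rng = np.random.default_rng(2)
z, y, x = np.mgrid[:32, :32, :32]
sphere = ((z - 16)**2 + (y - 16)**2 + (x - 16)**2) <= 6**2
vol = 1.0 * sphere + 0.1 + 0.02 * rng.standard_normal((32, 32, 32))

mask, fv, mac = fixed_threshold_segmentation(vol, fraction=0.5)
print(f"FV = {fv:.2f} ml, mAC = {mac:.2f}")
```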
34
Deconvolution of images in centimeter-band radio astronomy for the exploitation of new radio interferometers: characterization of non-thermal components in galaxy clusters. Dabbech, Arwa, 28 April 2015 (has links)
Within the framework of the preparation for the Square Kilometre Array (SKA), the world's largest radio telescope, new imaging challenges have to be met. The data acquired by the SKA will have to be processed in real time because of their enormous rate. In addition, thanks to its unprecedented resolution and sensitivity, SKA images will have a very high dynamic range over wide fields of view. Hence, there is an urgent need for new imaging techniques that are robust, efficient, and fully automated. The goal of this thesis is to develop a new technique for reconstructing a model image of the radio sky from the radio observations. The method has been designed to estimate images with high dynamic range, with particular attention to recovering faint extended emission usually buried in the PSF sidelobes of the brighter sources and in the noise. We propose a new approach based on sparse representations, called MORESANE. The radio sky is assumed to be a summation of sources, considered as atoms of an unknown synthesis dictionary; these atoms are learned using analysis priors from the observed image. Results obtained on realistic simulations show that MORESANE is very promising for the restoration of radio images: it outperforms the standard tools and is very competitive with methods recently proposed in the literature. MORESANE is also applied to simulated observations with SKA1, with the aim of investigating the detectability of the intracluster non-thermal component. Our results indicate that these diffuse sources, characterized by very low surface brightness, will be observable up to the epoch of massive cluster formation with the SKA.
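MORESANE itself learns its dictionary from the observed image; as a hedged illustration of the generic sparse-synthesis deconvolution idea it builds on, here is a minimal ISTA sketch using the pixel basis instead of a learned dictionary (all parameter choices are assumptions).

```python
import numpy as np
from scipy.signal import fftconvolve

def ista_deconvolve(dirty, psf, lam=0.01, n_iter=200):
    """Sparse deconvolution by ISTA: min_x 0.5*||y - h*x||^2 + lam*||x||_1.
    A crude stand-in for dictionary-based methods: the sparsifying
    dictionary here is the pixel basis, not a learned one."""
    psf_flip = psf[::-1, ::-1]
    # Step size from the Lipschitz constant of the gradient,
    # max |FFT(psf)|^2, evaluated on a grid matching the image.
    step = 1.0 / (np.abs(np.fft.fft2(psf, dirty.shape)) ** 2).max()
    x = np.zeros_like(dirty)
    for _ in range(n_iter):
        resid = fftconvolve(x, psf, mode="same") - dirty          # h*x - y
        x = x - step * fftconvolve(resid, psf_flip, mode="same")  # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x
```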
35
Enhancement of the Signal-to-Noise Ratio in Sonic Logging Waveforms by Seismic Interferometry. Aldawood, Ali, 04 1900 (has links)
Sonic logs are essential tools for reliably identifying interval velocities which, in turn, are used in many seismic processes. One problem that arises while logging is irregularities due to washout zones along the borehole surfaces, which scatter the transmitted energy and hence weaken the signal recorded at the receivers. To alleviate this problem, I have extended the theory of super-virtual refraction interferometry to enhance the signal-to-noise ratio (SNR) of sonic waveforms. Tests on synthetic and real data show noticeable SNR enhancements of refracted P-wave arrivals in the sonic waveforms.

The theory of super-virtual interferometric stacking is composed of two redatuming steps followed by a stacking procedure. The first redatuming step is of correlation type, where traces are correlated together to obtain virtual traces with the sources datumed to the refractor. The second step is of convolution type, where traces are convolved together to dedatum the sources back to their original positions. The stacking procedure following each step enhances the signal-to-noise ratio of the refracted P-wave first arrivals.

Datuming with correlation and convolution of traces introduces severe artifacts, denoted as correlation artifacts, in super-virtual data. To overcome this problem, I replace the datuming-with-correlation step by datuming with deconvolution. Although the former datuming method is more robust, the latter reduces the artifacts significantly. Moreover, deconvolution can be a noise amplifier, which is why a regularization term is utilized, rendering the datuming with deconvolution more stable. Tests of datuming with deconvolution instead of correlation on synthetic and real data examples show significant reduction of these artifacts, especially when compared with the conventional way of applying the super-virtual refraction interferometry method.
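The two redatuming steps can be sketched as frequency-domain operations on pairs of traces (a schematic under stated assumptions, not the thesis code; the water-level constant eps is an illustrative regularization choice).

```python
import numpy as np

def datum_correlate(trace_a, trace_b):
    """Correlation-type redatuming: cross-correlate two traces so the
    common source-side path cancels (phases subtract)."""
    A, B = np.fft.rfft(trace_a), np.fft.rfft(trace_b)
    return np.fft.irfft(A * np.conj(B), n=len(trace_a))

def datum_deconvolve(trace_a, trace_b, eps=1e-3):
    """Deconvolution-type redatuming with a water-level regularization
    term to keep the spectral division stable in the presence of noise."""
    A, B = np.fft.rfft(trace_a), np.fft.rfft(trace_b)
    return np.fft.irfft(A * np.conj(B) / (np.abs(B) ** 2 + eps), n=len(trace_a))

def stack(virtual_traces):
    """Stacking: average the virtual traces over all available source
    positions to enhance the SNR of the first arrivals."""
    return np.mean(virtual_traces, axis=0)
```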
36
Dual-View Inverted Selective Plane Illumination Microscopy (diSPIM) Imaging for Accurate 3D Digital Pathology. Hu, Bihe, January 2020 (has links)
For decades, histopathology and cytology have provided the reference standard for cancer diagnosis, prognosis prediction, and treatment decisions. However, they are limited to 2D slices, created via cutting and/or smearing, which do not faithfully represent the true 3D structures of the cellular or tissue material. Multiple imaging methods have been used for non-destructive histologic imaging of tissues, but they are usually limited by varying combinations of low resolution, low penetration depth, or relatively slow imaging speed, and all suffer from anisotropic resolution, which can distort 3D tissue architectural renderings and thus hinder new work to analyze and quantify 3D tissue microarchitecture. Therefore, there is a clear need for a non-destructive imaging tool that can accurately represent the 3D structures of the tissue or cellular architecture, with qualities and features comparable to traditional histopathology.
In this work, dual-view inverted selective plane illumination microscopy (diSPIM) has been customized and optimized for fast, 3D imaging of large biospecimens. Imaging contrast of highly scattering samples has been further improved by adding confocal detection and/or structured illumination (SI) as additional optional imaging modes. A pipeline of dual-view imaging and processing has also been developed to achieve more isotropic 3D resolution, specifically on DRAQ5 and eosin (D&E) stained large (millimeter to centimeter size) biopsies.
To determine the impact of 3D, high-resolution imaging on clinical diagnostic endpoints, multiple prostate cancer (PCa) biopsies were collected, imaged with the diSPIM, and evaluated by pathologists. Pathologists were found to be equally confident in PCa diagnoses made from 3D volumes and from 2D slices, and diagnostic agreement was significantly higher for 3D volumes than for 2D slices.
The high resolution and large-volume coverage of the diSPIM may also help verify results from other, lower-resolution modalities by serving as a 3D histology surrogate. Tissue correlations have been found between images acquired by diSPIM and photo-acoustic imaging, and between diSPIM and biodynamic imaging, demonstrating that diSPIM is a useful tool for validating lower-resolution imaging methods. The potential of diSPIM imaging has also been demonstrated in other applications, such as the study of in-vitro neural models.
37
Signal Processing on Digitized Ladar Waveforms for Enhanced Resolution on Surface Edges. Neilsen, Kevin D., 01 May 2011 (has links)
Automatic target recognition (ATR) relies on images from various sensors, including 3-D imaging ladar. The accuracy of recognizing a target is highly dependent on the number of points on the target. The highest spatial frequencies of a target are located on edges; therefore, a higher sampling density is desirable at these locations. A ladar receiver captures information on edges by detecting two surfaces when the beam lands partially on one surface and partially on another, provided the distance between the surfaces is greater than the temporal pulse width of the laser.
In recent years, the ability to digitize the intensity of the light seen at the ladar receiver has led to digitized ladar waveforms that can be post-processed. Post-processing the data allows signal processing techniques to be implemented on stored waveforms. The digitized waveform provides more information than simply a range from the sensor to the target and the intensity of received light. Complex surfaces change the shape of the return.
This thesis exploits this information to enhance the resolution on the edges of targets in the 3-D image or point cloud. First, increased range resolution is obtained by means of deconvolution. This allows two surfaces to be detected even if the distance between them is less than the width of the transmitted pulse. Second, the locations of multiple returns within the ladar beam footprint are computed.
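A hedged sketch of the first step: separating two overlapping returns by regularized (Wiener-type) deconvolution of the digitized waveform with the transmitted pulse shape. All waveform parameters below are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.signal import find_peaks

def deconvolve_returns(waveform, pulse, water_level=1e-2):
    """Wiener-style deconvolution of a digitized ladar waveform by the
    transmitted pulse shape; the water level regularizes the division."""
    n = len(waveform)
    W = np.fft.rfft(waveform)
    P = np.fft.rfft(pulse, n)
    return np.fft.irfft(W * np.conj(P) / (np.abs(P) ** 2 + water_level), n=n)

# Two surfaces separated by less than the transmitted pulse width.
rng = np.random.default_rng(3)
t = np.arange(256)
pulse = np.exp(-0.5 * ((t - 20) / 6.0) ** 2)   # transmitted pulse shape
surfaces = np.zeros(256)
surfaces[[100, 110]] = [1.0, 0.8]              # two close returns
received = np.convolve(surfaces, pulse)[:256] + 0.01 * rng.standard_normal(256)

sharp = deconvolve_returns(received, pulse)
peaks, _ = find_peaks(sharp, height=0.3 * sharp.max())
print(peaks)  # expect peaks near samples 100 and 110
```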
Using deconvolution on the received waveform, an improvement in range resolution from 30 cm to 14 cm is reported. The error on these measurements has a standard deviation of 2 cm. A method for estimating the width of a 19 cm slot showed a standard deviation of 3.44 cm. A method for angle estimation from a single waveform was also developed; it showed a standard deviation of 1.4° on a 75° surface. Processed point clouds show sharper edges than the originals.
The processing method presented in this thesis enhances the resolution on the edges of targets where it is needed. As a result, the high spatial frequency content of edges is better represented. While ATR applications may benefit from this thesis, other applications such as 3-D object modeling may benefit from better representation of edges as well.
38
Nonparametric and Empirical Bayes Estimation Methods. Benhaddou, Rida, 01 January 2013 (has links)
In the present dissertation, we investigate two different nonparametric models: the empirical Bayes model and the functional deconvolution model. In the case of nonparametric empirical Bayes estimation, we carry out a complete minimax study. In particular, we derive minimax lower bounds for the risk of the nonparametric empirical Bayes estimator for a general conditional distribution. This result has never been obtained previously. In order to attain optimal convergence rates, we use a wavelet series based empirical Bayes estimator constructed in Pensky and Alotaibi (2005). We propose an adaptive version of this estimator using Lepski's method and show that the estimator attains optimal convergence rates. The theory is supplemented by numerous examples. Our study of the functional deconvolution model expands results of Pensky and Sapatinas (2009, 2010, 2011) to the case of estimating an (r + 1)-dimensional function or of dependent errors. In both cases, we derive minimax lower bounds for the integrated square risk over a wide set of Besov balls and construct adaptive wavelet estimators that attain those optimal convergence rates. In particular, in the case of estimating a periodic (r + 1)-dimensional function, we show that by choosing Besov balls of mixed smoothness we can avoid the "curse of dimensionality" and, hence, obtain higher than usual convergence rates when r is large. The study of deconvolution of a multivariate function is motivated by seismic inversion, which can be reduced to the solution of noisy two-dimensional convolution equations that allow one to draw inference on underground layer structures along chosen profiles. The common practice in seismology is to recover layer structures separately for each profile and then to combine the derived estimates into a two-dimensional function. By studying the two-dimensional version of the model, we demonstrate that this strategy usually leads to estimators which are less accurate than the ones obtained as two-dimensional functional deconvolutions. Finally, we consider a multichannel deconvolution model with long-range dependent Gaussian errors. We do not limit our consideration to a specific type of long-range dependence; rather, we assume that the eigenvalues of the covariance matrix of the errors are bounded above and below. We show that convergence rates of the estimators depend on a balance between the smoothness parameters of the response function, the smoothness of the blurring function, the long memory parameters of the errors, and how the total number of observations is distributed among the channels.
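In schematic form (my notation, not necessarily the dissertation's), the multichannel deconvolution model and its Fourier-domain factorization are:

```latex
% Schematic multichannel deconvolution model (notation illustrative):
% M channels, each blurred by its own kernel g_l and observed with noise.
\[
  y_l(t) \;=\; (f * g_l)(t) \;+\; \sigma\,\varepsilon_l(t),
  \qquad l = 1, \dots, M, \quad t \in [0,1].
\]
% In the Fourier domain the convolution factorizes coefficient-wise,
\[
  \widehat{y}_{l,k} \;=\; \widehat{f}_k\, \widehat{g}_{l,k} \;+\; \sigma\, z_{l,k},
\]
% so f can be recovered by regularized inversion of the coefficients
% \widehat{g}_{l,k}, followed by wavelet thresholding of the estimator.
```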
39
Modeling the Spatially Varying Point Spread Function of the Kirkpatrick-Baez Optic. Adelman, Nathan, 01 June 2018 (links) (PDF)
Lawrence Livermore National Laboratory's (LLNL) National Ignition Facility (NIF) uses a variety of diagnostics and image-capturing optics for collecting data in High Energy Density Physics (HEDP) experiments. However, every image-capturing system causes blurring and degradation of the images captured. This degradation can be described mathematically through a camera system's Point Spread Function (PSF) and can be reversed if the system's PSF is known. This process is deconvolution, also called image restoration. Many PSFs can be determined experimentally by imaging a point source, a light-emitting object that appears infinitesimally small to the camera. However, NIF's Kirkpatrick-Baez Optic (KBO) is more difficult to characterize because it has a spatially varying PSF. Spatially varying PSFs make deconvolution much more difficult because, instead of being 2-dimensional, a spatially varying PSF is 4-dimensional. This work discusses a method for modeling the KBO's PSF as a sum of products of two basis functions. The model assumes that the four dimensions of the PSF are separable into two 2-dimensional basis functions. While previous work assumed parametric forms for some of the basis functions, this work uses only numeric representations of them. Previous work also ignored the possibility of non-linear magnification along each image axis, whereas this work successfully characterizes the KBO's non-linear magnification. Implementation of this model gives exceptional results, with the correlation coefficient between a model-generated image and an experimental image as high as 0.9994. Modeling the PSF with high accuracy lays the groundwork for deconvolution of images generated by the KBO.
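One common way to write such a separable model (notation illustrative, not necessarily the thesis's exact formulation) is as a low-rank expansion that reduces spatially varying blur to a small number of ordinary convolutions:

```latex
% Schematic separable expansion: the 4-dimensional spatially varying
% PSF is written as a sum of products of 2-dimensional basis functions,
\[
  h(x, y;\, u, v) \;\approx\; \sum_{k=1}^{K} a_k(u, v)\, b_k(x - u,\; y - v),
\]
% which turns the spatially varying imaging integral into K ordinary
% convolutions of the weighted object with fixed kernels:
\[
  I(x, y) \;=\; \iint h(x, y;\, u, v)\, O(u, v)\, du\, dv
          \;\approx\; \sum_{k=1}^{K} \bigl[(a_k O) * b_k\bigr](x, y).
\]
```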
40
PSF Sampling in Fluorescence Image Deconvolution. Inman, Eric A, 01 March 2023 (links) (PDF)
All microscope imaging is strongly affected by inherent resolution limitations caused by out-of-focus light and diffraction effects. The traditional approach to restoring image resolution is to use a deconvolution algorithm to "invert" the effect of convolving the volume with the point spread function. However, these algorithms fall short in several areas, such as noise amplification and stopping criteria. In this paper, we instead reconstruct an explicit volumetric representation of the fluorescence density in the sample, fitting a neural network to the target z-stack to minimize a reconstruction cost function. Additionally, we use weighted sampling of the point spread function to avoid unnecessary computation and to prioritize non-zero signals. In a baseline comparison against the Richardson-Lucy method, our algorithm outperforms RL on images affected by high levels of noise.
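For context, a minimal sketch of the Richardson-Lucy baseline named above (a generic implementation, not the paper's code). Its tendency to amplify noise as iterations continue is exactly the stopping-criterion problem the abstract mentions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30):
    """Richardson-Lucy deconvolution of a 3-D z-stack: a multiplicative
    fixed-point update that preserves non-negativity of the estimate."""
    psf_mirror = psf[::-1, ::-1, ::-1]  # adjoint of convolution by psf
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against /0
        estimate = np.maximum(
            estimate * fftconvolve(ratio, psf_mirror, mode="same"), 0.0)
    return estimate
```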