  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Contribution to fluorescence microscopy, 3D thick samples deconvolution and depth-variant PSF

Maalouf, Elie 20 December 2010 (has links) (PDF)
The 3-D fluorescence microscope has become the method of choice in the biological sciences for the study of living cells. However, data acquired with conventional 3-D fluorescence microscopy are not quantitatively meaningful because of distortions induced by the optical acquisition process. Reliable measurements require the correction of these distortions. Knowing the instrument's impulse response, also known as the PSF, one can consider inverting the convolution performed by the microscope, a process known as "deconvolution". However, when the system response is not invariant across the observation field, classical algorithms can introduce large errors into the results. In this thesis we propose a new approach for bypassing the problem of a non-invariant PSF that can be combined with any classical deconvolution algorithm, direct or iterative, without any modification to the latter. Based on the hypothesis that, in an image restored under the (incorrect) invariance assumption, the error is minimal near the depth of the PSF used, EMMA (Evolutive Merging Masks Algorithm) blends multiple invariant-assumption deconvolutions using a set of merging masks. To obtain a sufficient number of measured PSFs at various depths for better restoration with EMMA (or any other depth-variant deconvolution algorithm), we also propose a 3-D PSF interpolation algorithm based on image-moment theory, with Zernike polynomials as the decomposition basis. Each known PSF is decomposed into a set of Zernike moments, the variation of each moment with depth is fitted with a polynomial function, and the resulting functions are then used to interpolate the Zernike moments of the PSF at the required depth, from which the interpolated PSF is reconstructed.
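The depth-variant strategy sketched in this abstract — deconvolving the stack with PSFs measured at several depths and merging the results with linear interpolation masks — can be illustrated as follows. This is a minimal sketch, not the thesis's EMMA implementation: the Wiener filter, the regularization constant `k`, and the simple two-PSF linear masks are all assumptions standing in for the actual algorithm.

```python
import numpy as np

def wiener_deconv(img, psf, k=1e-3):
    # Shift-invariant Wiener deconvolution in the frequency domain.
    H = np.fft.fft2(psf, s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(H) / (np.abs(H) ** 2 + k)))

def emma_blend(stack, psfs, depths, k=1e-3):
    """Blend per-depth shift-invariant deconvolutions with linear masks.

    stack  : (Z, Y, X) blurred image stack
    psfs   : 2-D PSFs measured at the sorted z-indices listed in `depths`
    """
    restored = [np.stack([wiener_deconv(sl, p, k) for sl in stack]) for p in psfs]
    out = np.empty_like(stack, dtype=float)
    for z in range(stack.shape[0]):
        i = int(np.searchsorted(depths, z))
        if i == 0:                       # shallower than the first measured PSF
            out[z] = restored[0][z]
        elif i == len(depths):           # deeper than the last measured PSF
            out[z] = restored[-1][z]
        else:                            # linear mask between the two bracketing PSFs
            a = (z - depths[i - 1]) / (depths[i] - depths[i - 1])
            out[z] = (1 - a) * restored[i - 1][z] + a * restored[i][z]
    return out
```

Each slice is restored once per measured PSF under the invariance assumption, and the final estimate weights each restoration by the slice's proximity to that PSF's depth.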
2

Development of Human Brain Sodium Magnetic Resonance Imaging (23Na MRI) Methods

Polak, Paul January 2022 (has links)
Sodium (23Na) plays a critical role in all organisms: it is crucial to cellular homeostasis, pH regulation, and action potential propagation in muscle and neuronal fibres. Healthy cells have a low intracellular and a high extracellular 23Na concentration, with the sodium-potassium pump maintaining this gradient. Maintaining the gradient accounts for approximately 50% of the human brain's total energy consumption, demonstrating the pump's importance in health. A failure of the sodium-potassium pump leads to cellular apoptosis and ultimately necrosis, with potentially disastrous results for neurological function. Magnetic resonance imaging (MRI) of 23Na is of great interest because of the ubiquity of sodium in cellular processes. However, it is hampered by many technical challenges, among them a low gyromagnetic ratio, short T2∗ relaxation times, and low concentrations, all of which lead to long acquisitions to compensate for the poor inherent signal. In addition, 23Na MRI requires specialized hardware, non-standard pulse sequences, and dedicated reconstruction methods to create images. These factors have rendered clinical applications of 23Na MRI virtually non-existent, despite research indicating sodium's role in various neurological disorders, including multiple sclerosis, Alzheimer's disease, stroke, cancer, and traumatic brain injury. This work is motivated by a desire to use 23Na MRI in clinical settings. To that end, hardware and software methods were first developed to process sodium images. To characterize the imaging system, the point-spread function (PSF) and the related modulation transfer function (MTF) were calculated with the aid of a 3D-printed resolution phantom containing different 23Na concentrations in gelatin. Two pulse sequences with similar acquisition times, density-adapted projection reconstruction (DA-3DPR) and Fermat looped orthogonally encoded trajectories (FLORET), were tested. Reconstructions were performed with the non-uniform fast Fourier transform. Results indicated a full-width, half-maximum (FWHM) value of 1.8 for DA-3DPR and 2.3 for FLORET. In a follow-up study, simulation experiments were added alongside phantom experiments with various sodium concentrations in 3% agar. The simulations indicated high potential variability in the MTF calculations depending on the methodology, while the phantom experiments found FWHMs of 2.0 (DA-3DPR) and 2.5 (FLORET). Diffusion tensor imaging (DTI) is an MRI technique widely adopted for the assessment of a variety of neurological disorders. Combining DTI with 23Na MRI could provide novel insight into brain pathology; however, a study with a healthy population is warranted before examinations of other populations. Fifteen subjects were scanned with DTI and sodium MRI, and the latter was used to derive voxel-wise tissue sodium concentration (TSC). Regional grey and white matter (WM) TSC was analyzed and compared to fractional anisotropy (FA) and cerebrospinal fluid (CSF) proximity. Results indicated that WM voxels proximal to CSF regions (e.g. the corpus callosum) could have lower than expected FA values and higher measured TSC, with an inverse correlation between TSC and distance to CSF. This is likely the result of the broad PSF of 23Na MRI, as regions distal to CSF did not exhibit this phenomenon. This potentially represents a confounding effect when interpreting sodium concentrations, especially in regions proximal to the high 23Na content of CSF. / Thesis / Doctor of Philosophy (PhD)
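FWHM figures like those quoted in this abstract are typically read off a measured PSF line profile. A minimal sketch of such a measurement, assuming a single peak away from the array edges and using linear interpolation between samples (an illustration, not the thesis's analysis code):

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a sampled PSF line profile,
    with linear interpolation at the two half-maximum crossings.
    Assumes a single peak that is not at the edge of the array."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()
    half = p.max() / 2.0
    above = np.flatnonzero(p >= half)
    left, right = above[0], above[-1]

    def cross(i0, i1):  # sub-sample position where p crosses `half`
        return i0 + (half - p[i0]) / (p[i1] - p[i0])

    lo = cross(left - 1, left) if left > 0 else float(left)
    hi = cross(right, right + 1) if right < len(p) - 1 else float(right)
    return (hi - lo) * dx
```

The `dx` argument converts the width from samples to physical units (e.g. mm per voxel), which is how resolution-phantom profiles are usually reported.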
3

Contribution to fluorescence microscopy, 3D thick samples deconvolution and depth-variant PSF / Contribution à la microscopie de fluorescence, Deconvolution des échantillons épais avec PSF variables en profondeur

Maalouf, Elie 20 December 2010 (has links)
La reconstruction 3D par coupes sériées en microscopie optique est un moyen efficace pour étudier des spécimens biologiques fluorescents. Dans un tel système, la formation d'une image peut être représentée comme une convolution linéaire d'un objet avec une réponse impulsionnelle optique de l'instrument (PSF). Pour une étude quantitative, une estimation de l'objet doit être calculée en utilisant la déconvolution qui est le phénomène inverse de la convolution. Plusieurs algorithmes de déconvolution ont été développés en se basant sur des modèles statistiques ou par inversion directe, mais ces algorithmes se basent sur la supposition de l'invariance spatiale de la PSF pour simplifier et accélérer le processus. Dans certaines configurations optiques la PSF 3D change significativement en profondeur et ignorer ces changements implique des erreurs quantitatives dans l'estimation. Nous proposons un algorithme (EMMA) qui se base sur une hypothèse où l'erreur minimale sur l'estimation par un algorithme ne tenant pas compte de la non-invariance, se situe aux alentours de la position (profondeur) de la PSF utilisée. EMMA utilise des PSF à différentes positions et fusionne les différentes estimations en utilisant des masques d'interpolation linéaires adaptatifs aux positions des PSF utilisées. Pour obtenir des PSF à différentes profondeurs, un algorithme d'interpolation de PSF a également été développé. La méthode consiste à décomposer les PSF mesurées en utilisant les moments de Zernike pseudo-3D, puis les variations de chaque moment sont approximés par une fonction polynomiale. Ces fonctions polynomiales sont utilisées pour interpoler des PSF aux profondeurs voulues. / The 3-D fluorescence microscope has become the method of choice in the biological sciences for the study of living cells. However, data acquired with conventional 3-D fluorescence microscopy are not quantitatively meaningful because of distortions induced by the optical acquisition process. 
Reliable measurements require the correction of these distortions. Knowing the instrument's impulse response, also known as the PSF, one can consider inverting the convolution performed by the microscope, a process known as "deconvolution". However, when the system response is not invariant across the observation field, classical algorithms can introduce large errors into the results. In this thesis we propose a new approach for bypassing the problem of a non-invariant PSF that can be combined with any classical deconvolution algorithm, direct or iterative, without any modification to the latter. Based on the hypothesis that, in an image restored under the (incorrect) invariance assumption, the error is minimal near the depth of the PSF used, EMMA (Evolutive Merging Masks Algorithm) blends multiple invariant-assumption deconvolutions using a set of merging masks. To obtain a sufficient number of measured PSFs at various depths for better restoration with EMMA (or any other depth-variant deconvolution algorithm), we also propose a 3-D PSF interpolation algorithm based on image-moment theory, with Zernike polynomials as the decomposition basis. Each known PSF is decomposed into a set of Zernike moments, the variation of each moment with depth is fitted with a polynomial function, and the resulting functions are then used to interpolate the Zernike moments of the PSF at the required depth, from which the interpolated PSF is reconstructed.
4

Diffraction efficiency and aberrations of diffractive elements obtained from orthogonal expansion of the point spread function

Schwiegerling, Jim 27 September 2016 (has links)
The Point Spread Function (PSF) indirectly encodes the wavefront aberrations of an optical system and is therefore a metric of system performance. Analysis of the PSF's properties is useful in the case of diffractive optics, where the wavefront emerging from the exit pupil is not necessarily continuous and consequently not well represented by traditional wavefront error descriptors such as Zernike polynomials. Discontinuities in the wavefront from diffractive optics occur when step heights in the element are not multiples of the illumination wavelength. Examples include binary or N-step structures, multifocal elements where two or more foci are intentionally created, and cases where wavelengths other than the design wavelength are used. Here, a technique for expanding the electric field amplitude of the PSF into a series of orthogonal functions is explored. The expansion coefficients provide insight into the diffraction efficiency and aberration content of diffractive optical elements. Furthermore, the technique is more broadly applicable to elements with a finite number of diffractive zones, as well as to decentered patterns.
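The point that step heights which are not multiples of the illumination wavelength redistribute energy among diffraction orders can be illustrated with the standard scalar-theory result for an ideal linear-blaze (kinoform) profile. This is textbook material, not the paper's orthogonal-expansion method, and assumes perfect scalar diffraction:

```python
import math

def order_efficiency(m, depth_waves):
    """Scalar diffraction efficiency of an ideal linear-blaze phase profile
    into order m when the phase step equals `depth_waves` design wavelengths:
    eta_m = sinc^2(m - depth_waves)."""
    x = m - depth_waves
    if x == 0:
        return 1.0  # all energy goes into order m at the design wavelength
    return (math.sin(math.pi * x) / (math.pi * x)) ** 2
```

At the design wavelength (`depth_waves = 1`) the first order carries all the energy; at half a wave of phase depth the energy splits symmetrically between orders 0 and 1, which is the regime where multifocal elements operate.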
5

Extend Depth Of Field From A Lens System Using A Phase Mask

Hsu, Chun-hsiang 08 July 2009 (has links)
A method using a phase mask to extend the depth of field of an incoherent lens system is presented. The phase mask is designed to generate a point spread function whose intensity distribution is invariant to misfocus; the image can therefore be retrieved by deconvolving the misfocused one. An application to 3-D profile sensing using point white-light illumination is presented as well. A fringe pattern is projected onto the inspected surface using the point white-light source. The fringe distribution is then observed from a different viewpoint by a CCD camera through the presented phase mask. Phase can be extracted by the Fourier transform method or the phase-shifting technique. With triangulation methods or proper calibration approaches, depth information can be recovered from the phase of the fringes. The phase mask enlarges the depth of field of the image acquisition system, while the point white-light illumination increases the depth of focus of the fringe projection system. Thus, a highly accurate, non-scanning projected-fringe profilometer with a large depth measuring range can be realized.
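The Fourier transform method for fringe phase extraction mentioned above can be sketched in one dimension: isolate the positive-carrier sideband in the spectrum and take the angle of the resulting analytic signal (a Takeda-style approach; the band-pass window width used here is an arbitrary assumption):

```python
import numpy as np

def ft_phase(fringe, carrier_bin):
    """Single-shot Fourier-transform phase extraction (1-D sketch).

    fringe      : samples of I(x) = a + b*cos(2*pi*f0*x + phi(x))
    carrier_bin : FFT bin index of the spatial carrier f0
    Returns the wrapped phase of the +f0 sideband, i.e. carrier ramp + phi.
    """
    F = np.fft.fft(np.asarray(fringe, dtype=float))
    G = np.zeros_like(F)
    lo = carrier_bin // 2                      # crude band-pass around +f0,
    hi = carrier_bin + carrier_bin // 2 + 1    # excluding the DC term
    G[lo:hi] = F[lo:hi]
    return np.angle(np.fft.ifft(G))
```

Subtracting the known carrier ramp from the returned phase leaves the wrapped surface phase phi(x), which triangulation or calibration then converts to depth.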
6

Budget d’erreur en optique adaptative : Simulation numérique haute performance et modélisation dans la perspective des ELT / Adaptive optics error breakdown : high performance numerical simulation and modeling for future ELT

Moura Ferreira, Florian 11 October 2018 (has links)
D'ici quelques années, une nouvelle classe de télescopes verra le jour : celle des télescopes géants. Ceux-ci se caractériseront par un diamètre supérieur à 20m, et jusqu'à 39m pour le représentant européen, l'Extremely Large Telescope (ELT). Seulement, l'atmosphère terrestre vient dégrader sévèrement les images obtenues lors d'observations au sol : la résolution de ces télescopes est alors réduite à celle d'un télescope amateur de quelques dizaines de centimètres de diamètre. L'optique adaptative (OA) devient alors essentielle. Cette dernière permet de corriger en temps-réel les perturbations induites par l'atmosphère et de retrouver la résolution théorique du télescope. Néanmoins, les systèmes d'OA ne sont pas exempts de tout défaut, et une erreur résiduelle persiste sur le front d'onde (FO) et impacte la qualité des images obtenues. Cette dernière est dépendante de la Fonction d'Étalement de Point (FEP) de l'instrument utilisé, et la FEP d'un système d'OA dépend elle-même de l'erreur résiduelle de FO. L'identification et la compréhension des sources d'erreurs est alors primordiale. Dans la perspective de ces télescopes géants, le dimensionnement des systèmes d'OA nécessaires devient tel que ces derniers représentent un challenge technologique et technique. L'un des aspects à considérer est la complexité numérique de ces systèmes. Dès lors, les techniques de calcul de haute performance deviennent nécessaires, comme la parallélisation massive. Le General Purpose Graphical Processing Unit (GPGPU) permet d'utiliser un processeur graphique à cette fin, celui-ci possédant plusieurs milliers de coeurs de calcul utilisables, contre quelques dizaines pour un processeur classique. Dans ce contexte, cette thèse s'articule autour de trois parties. La première présente le développement de COMPASS, un outil de simulation haute performance bout-en-bout dédié à l'OA, notamment à l'échelle des ELT. 
Tirant pleinement parti des capacités de calcul des GPU, COMPASS permet alors de simuler une OA ELT en quelques minutes. La seconde partie fait état du développement de ROKET : un estimateur complet du budget d'erreur d'un système d'OA intégré à COMPASS, permettant ainsi d'étudier statistiquement les différentes sources d'erreurs et leurs éventuels liens. Enfin, des modèles analytiques des différentes sources d'erreur sont dérivés et permettent de proposer un algorithme d'estimation de la FEP. Les possibilités d'applications sur le ciel de cet algorithme sont également discutées. / In a few years, a new class of giant telescopes will appear, with diameters larger than 20 m, up to 39 m for the European Extremely Large Telescope (ELT). However, images obtained from ground-based observations are severely degraded by the atmosphere, reducing the resolution of these giant telescopes to that of an amateur telescope a few tens of centimeters in diameter. Adaptive optics (AO) therefore becomes essential: it corrects, in real time, the disturbance due to atmospheric turbulence in order to recover the theoretical resolution of the telescope. Nevertheless, AO systems are not perfect: a residual wavefront error remains and still degrades image quality. The latter is measured by the point spread function (PSF) of the system, and this PSF depends on the residual wavefront error. Hence, identifying and understanding the various contributors to the AO residual error is essential. For these extremely large telescopes, the dimensioning of the AO systems is challenging. In particular, their numerical complexity impacts the simulation tools needed for AO design, so high-performance computing techniques relying on massive parallelization are required. General-Purpose computing on Graphics Processing Units (GPGPU) enables the use of GPUs for this purpose: this architecture is suitable for massive parallelization as it leverages the GPU's several thousand cores, against a few tens for a classical CPU. In this context, this PhD thesis is composed of three parts. The first presents the development of COMPASS, a GPU-based high-performance end-to-end simulation tool for AO systems that scales to the ELT; its performance allows AO systems for the ELT to be simulated in a few minutes. In the second part, an error-breakdown estimation tool, ROKET, is added to the end-to-end simulation in order to study the various contributors to the AO residual error. Finally, an analytical model is proposed for those error contributors, leading to a new way of estimating the PSF. Possible on-sky applications are also discussed.
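Error-breakdown terms like those ROKET estimates are commonly combined under an assumption of statistical independence and converted to an expected Strehl ratio via the extended Maréchal approximation. The sketch below shows that standard conversion only; it is not ROKET's actual model, and the example wavelength is an assumption:

```python
import math

def strehl_from_breakdown(rms_terms_nm, wavelength_nm):
    """Combine statistically independent residual-error terms (each an rms
    wavefront error in nm) quadratically, then apply the Marechal
    approximation S ~ exp(-sigma^2), with sigma in radians of phase."""
    variance = sum(e ** 2 for e in rms_terms_nm)   # nm^2, independence assumed
    sigma = 2.0 * math.pi * math.sqrt(variance) / wavelength_nm
    return math.exp(-sigma ** 2)
```

This is why identifying each contributor matters: the terms add in quadrature, so one dominant error source sets the achievable Strehl almost by itself.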
7

Modeling the Spatially Varying Point Spread Function of the Kirkpatrick-Baez Optic

Adelman, Nathan 01 June 2018 (has links) (PDF)
Lawrence Livermore National Laboratory's (LLNL) National Ignition Facility (NIF) uses a variety of diagnostics and image-capturing optics for collecting data in High Energy Density Physics (HEDP) experiments. However, every image-capturing system blurs and degrades the images it captures. This degradation can be described mathematically through a camera system's Point Spread Function (PSF), and can be reversed if the system's PSF is known. This is deconvolution, also called image restoration. Many PSFs can be determined experimentally by imaging a point source, a light-emitting object that appears infinitesimally small to the camera. However, NIF's Kirkpatrick-Baez Optic (KBO) is more difficult to characterize because it has a spatially varying PSF. Spatially varying PSFs make deconvolution much more difficult because, instead of being 2-dimensional, a spatially varying PSF is 4-dimensional. This work discusses a method for modeling the KBO's PSF as a sum of products of two basis functions. The model assumes separability of the four dimensions of the PSF into two 2-dimensional basis functions. While previous work assumed parametric forms for some of the basis functions, this work uses only numeric representations of them. Previous work also ignored the possibility of non-linear magnification along each image axis, whereas this work successfully characterizes the KBO's non-linear magnification. Implementation of this model gives exceptional results, with the correlation coefficient between a model-generated image and an experimental image as high as 0.9994. Modeling the PSF with high accuracy lays the groundwork for deconvolution of images generated by the KBO.
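A spatially varying PSF modeled as a sum of products of numerically represented basis functions is closely related to a low-rank factorization of PSFs sampled across the field. The sketch below uses an SVD as an assumed stand-in for the thesis's fitting procedure, with each sampled PSF flattened to a row vector:

```python
import numpy as np

def separable_psf_model(psf_grid, rank):
    """Low-rank factorization of a spatially varying PSF.

    psf_grid : (P, K) array; row p is the vectorized K-pixel PSF measured
               at field position p
    Returns (weights, modes) with psf_grid ~ weights @ modes, i.e.
    PSF(field, pixel) ~ sum_k w_k(field) * m_k(pixel).
    """
    U, s, Vt = np.linalg.svd(psf_grid, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]
```

The `modes` rows play the role of pixel-domain basis functions and the `weights` columns describe how each mode's contribution varies across the field; a small `rank` captures a smoothly varying PSF with few terms.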
8

Image Degradation Due To Surface Scattering In The Presence Of Aberrations

Choi, Narak 01 January 2012 (has links)
This dissertation focuses on scattering phenomena from well-polished optical mirror surfaces. Specifically, image degradation by surface scatter from rough mirror surfaces is predicted for a two-mirror telescope operating at extremely short wavelengths (9 nm to 30 nm). To evaluate image quality, surface scatter is predicted from surface metrology data, and the point spread function in the presence of both surface scatter and aberrations is calculated. For predicting the scattered intensity distribution, both numerical and analytic methods are considered. Among the numerous analytic methods, the small perturbation method (classical Rayleigh-Rice surface scatter theory), the Kirchhoff approximation method (classical Beckmann-Kirchhoff surface scatter theory), and the generalized Harvey-Shack surface scatter theory are adopted. As a numerical method, the integral equation method (method of moments), known as a rigorous solution, is discussed. Since the numerical method is computationally too intensive to obtain the scattering prediction directly for the two-mirror telescope, it is used to validate the three approximate analytic methods in special cases. In our numerical comparison, among the three approximate methods, the generalized Harvey-Shack model shows excellent agreement with the rigorous solution, and it is used to predict surface scattering from the mirror surfaces. Regarding image degradation due to surface scatter in the presence of aberrations, it is shown that the composite point spread function is obtained in explicit form in terms of convolutions of the geometrical point spread function and scaled bidirectional scattering distribution functions of the individual surfaces of the imaging system. The approximations and assumptions in this formulation are discussed. The result is compared to the irradiance distribution obtained using commercial non-sequential ray tracing software for the case of a two-mirror telescope operating at extreme ultraviolet wavelengths, and the two results are virtually identical. Finally, the image degradation due to surface scatter from the mirror surfaces and the aberration of the telescope is evaluated in terms of the fractional ensquared energy (for different wavelengths and field angles), which is commonly used as an image quality requirement in many NASA astronomy programs.
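The fractional ensquared energy used as the final image-quality metric can be computed directly from a sampled composite PSF. A minimal sketch (the peak-centred square is a common but assumed convention; requirements may instead centre the box on the centroid):

```python
import numpy as np

def ensquared_energy(psf, half_width):
    """Fraction of the total PSF energy falling inside a square of
    +/- half_width pixels centred on the PSF peak."""
    psf = np.asarray(psf, dtype=float)
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
    box = psf[max(0, cy - half_width):cy + half_width + 1,
              max(0, cx - half_width):cx + half_width + 1]
    return float(box.sum() / psf.sum())
```

Scatter broadens the composite PSF's halo, which lowers the ensquared energy for a fixed box size even when the core stays sharp — exactly the degradation the dissertation quantifies versus wavelength and field angle.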
9

Sub-diffraction limited imaging of plasmonic nanostructures

Titus, Eric James 24 October 2014 (has links)
This thesis is focused on understanding the interactions between molecules and surface-enhanced Raman scattering (SERS) substrates that are typically unresolved due to the diffraction limit of light. Towards this end, we have developed and tested several sub-diffraction-limited imaging techniques in order to observe these interactions. First, we utilize an isotope-edited bianalyte approach combined with super-resolution imaging via Gaussian point-spread function fitting to elucidate the influence of Raman reporter molecules on the location of the SERS emission centroids. By using low concentrations of two different analyte molecules, we find that the location of the SERS emission centroid depends on the number and positions of the molecules present on the SERS substrate. It is also known that SERS enhancement partially results from the molecule coupling its emission into the far field through the plasmonic nanostructure. This results in a particle-dictated, dipole-like emission pattern that cannot be accurately modeled as a Gaussian, so we tested the applicability of super-resolution imaging with a dipole-emission fitting model to these data. To test this model, we first fit gold nanorod (AuNR) luminescence images, as AuNR luminescence is primarily coupled out through the longitudinal dipole plasmon mode. This study showed that a three-dimensional dipole model is necessary to fit the AuNR emission, with the model providing accurate orientation and emission wavelength parameters for the nanostructure, as confirmed by correlated AFM and spectroscopy. The dipole fitting technique was next applied to single- and multiple-molecule SERS emission from silver nanoparticle dimers. We again found that a three-dimensional dipole PSF was necessary to accurately model the emission and orientation parameters of the dimer, but that at the single-molecule level the movement of the molecule increases the uncertainty in the orientation parameters determined by the fit. 
Finally, we describe progress towards using a combined atomic force/optical microscope system to position a carbon nanotube analyte at known locations on the nanoparticle substrate. This would allow simultaneous mapping of nanoparticle topography and the exact locations of plasmonic enhancement around the nanostructure, but a consistently low signal-to-noise ratio kept this technique from being viable. / text
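Super-resolution localization by Gaussian PSF fitting is, in full, a nonlinear least-squares fit of a 2-D model to a noisy image. The sketch below uses a simplified noise-free shortcut instead — a log-parabola fit to the row and column marginals, which is exact only for a Gaussian PSF with an interior peak — to illustrate how sub-pixel centroids are extracted:

```python
import numpy as np

def log_parabola_peak(profile):
    """Sub-pixel peak position from the three samples around the maximum:
    a parabola fit to the log-intensity, exact for a noise-free Gaussian.
    Assumes the maximum is not the first or last sample."""
    i = int(np.argmax(profile))
    y0, y1, y2 = np.log(profile[i - 1:i + 2])
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

def localize_emitter(img):
    """Super-resolved (row, column) centroid of a single PSF image,
    using the 1-D fit on the row and column marginal sums."""
    return log_parabola_peak(img.sum(axis=1)), log_parabola_peak(img.sum(axis=0))
```

The dipole-emission fitting the thesis develops replaces the Gaussian model with a dipole radiation pattern, but the localization principle — fitting a model and reading off its sub-pixel center — is the same.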
10

GHOST IMAGE ANALYSIS FOR OPTICAL SYSTEMS

Abd El-Maksoud, Rania Hassan January 2009 (has links)
Ghost images are caused by the inter-reflections of light from optical surfaces whose transmittances are less than unity. Ghosts can reduce contrast, provide misleading information, and, if severe, can veil parts of the nominal image. This dissertation develops several methodologies to simulate ghost effects arising from an even number of light reflections between the surfaces of multi-element lens systems. We present an algorithm to generate the ghost layouts produced by two, four, and up to N (even) reflections. For each possible ghost layout, paraxial ray tracing is performed to calculate the locations of the Gaussian cardinal points, the locations and diameters of the ghost entrance and exit pupils, the locations and diameters of the ghost entrance and exit windows, and the ghost chief and marginal ray heights and angles at each surface in the ghost layout. The paraxial ray trace data are used to estimate the fourth-order ghost aberration coefficients. Petzval, tangential, and sagittal ghost image surfaces are introduced. Potential ghosts are formed at the intersection points between the ghost image surfaces and the Gaussian nominal image plane. A paraxial radiometric methodology is developed to estimate the ghost irradiance point spread function at the nominal image plane. Contrast reduction by ghosts can cause a reduction in the depth of field, and a simulation model and an experimental technique that can be used to measure the depth of field are presented. Finally, ghost simulation examples are provided and discussed.
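The two-reflection case of the ghost-layout enumeration can be sketched as follows. The surface numbering, pair ordering, and uniform reflectance default (a typical anti-reflection-coated value, an assumption) are illustrative only; the dissertation's algorithm additionally handles four and higher even numbers of reflections and performs full paraxial ray tracing on each layout:

```python
from itertools import combinations

def two_reflection_ghosts(n_surfaces, reflectance=0.01):
    """Enumerate two-reflection ghost paths in an n-surface lens system.

    A ghost arises when light reflects backward off surface j and forward
    again off an earlier surface i < j. For equal surface reflectances each
    path carries a relative irradiance of roughly reflectance**2.
    Returns a list of ((j, i), relative_irradiance) pairs.
    """
    return [((j, i), reflectance ** 2)
            for i, j in combinations(range(1, n_surfaces + 1), 2)]
```

An N-surface system therefore has N*(N-1)/2 two-reflection ghosts, which is why ghost counts grow quickly in multi-element lenses even before four-reflection paths are considered.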
