About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

GHOST IMAGE ANALYSIS FOR OPTICAL SYSTEMS

Abd El-Maksoud, Rania Hassan January 2009 (has links)
Ghost images are caused by the inter-reflections of light from optical surfaces that have transmittances less than unity. Ghosts can reduce contrast, provide misleading information, and, if severe, can veil parts of the nominal image. This dissertation develops several methodologies to simulate ghost effects arising from an even number of light reflections between the surfaces of multi-element lens systems. We present an algorithm to generate the ghost layouts produced by two, four, and up to N (even) reflections. For each possible ghost layout, paraxial ray tracing is performed to calculate the locations of the Gaussian cardinal points, the locations and diameters of the ghost entrance and exit pupils, the locations and diameters of the ghost entrance and exit windows, and the ghost chief and marginal ray heights and angles at each surface in the ghost layout. The paraxial ray trace data are used to estimate the fourth-order ghost aberration coefficients. Petzval, tangential, and sagittal ghost image surfaces are introduced. Potential ghosts are formed at the intersection points between the ghost image surfaces and the Gaussian nominal image plane. A paraxial radiometric methodology is developed to estimate the ghost irradiance point spread function at the nominal image plane. Contrast reduction by ghosts can cause a reduction in the depth of field, and a simulation model and experimental technique that can be used to measure the depth of field are presented. Finally, ghost simulation examples are provided and discussed.
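The paraxial machinery described above can be sketched with 2x2 ray-transfer matrices. The toy example below is not the dissertation's code: the singlet prescription is hypothetical, and the sign conventions for the unfolded reflected passes are simplified assumptions. It builds a nominal path and one two-reflection ghost path and compares where each focuses.

```python
import numpy as np

# Paraxial ray-transfer matrices acting on (y, n*u): height, reduced angle.
def translate(d, n):
    return np.array([[1.0, d / n], [0.0, 1.0]])

def refract(R, n1, n2):
    return np.array([[1.0, 0.0], [-(n2 - n1) / R, 1.0]])

def reflect(R, n):
    # Mirror of radius R inside a medium of index n (simplified signs).
    return np.array([[1.0, 0.0], [-2.0 * n / R, 1.0]])

# Hypothetical singlet: surfaces R1, R2, thickness t, glass index n.
n, R1, R2, t = 1.5, 50.0, -50.0, 5.0

# Nominal path: refract at surface 1, cross the glass, refract at surface 2.
M_nominal = refract(R2, n, 1.0) @ translate(t, n) @ refract(R1, 1.0, n)

# One two-reflection ghost path, written in unfolded order: in at surface 1,
# reflect at surface 2, reflect at surface 1, and finally out at surface 2.
M_ghost = (refract(R2, n, 1.0) @ translate(t, n) @ reflect(R1, n)
           @ translate(t, n) @ reflect(R2, n) @ translate(t, n)
           @ refract(R1, 1.0, n))

def back_focal_distance(M):
    # Trace a ray parallel to the axis at unit height; find the axis crossing.
    y, nu = M @ np.array([1.0, 0.0])
    return -y / nu                      # image space has n = 1

print(back_focal_distance(M_nominal), back_focal_distance(M_ghost))
```

The ghost path comes to focus well away from the nominal image plane, which is why it appears there as a defocused halo rather than a sharp double image.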
12

Employment of Crystallographic Image Processing Techniques to Scanning Probe Microscopy Images of Two-Dimensional Periodic Objects

Moon, Bill 01 January 2011 (has links)
Thin film arrays of molecules or supramolecules are active subjects of investigation because of their potential value in electronics, chemical sensing, catalysis, and other areas. Scanning probe microscopes (SPMs), including scanning tunneling microscopes (STMs) and atomic force microscopes (AFMs), are commonly used for the characterization and metrology of thin film arrays. As opposed to transmission electron microscopy (TEM), SPMs have the advantage that they can often make observations of thin films in air or liquid, while TEM requires highly specialized techniques if the sample is to be in anything but vacuum. SPM is a surface imaging technique, while TEM typically images a 2D projection of a thin 3D sample. Additionally, variants of SPM can observe more than just topography; for instance, magnetic force microscopy measures nanoscale magnetic properties. Thin film arrays are typically two-dimensionally periodic. A perfect, infinite two-dimensionally periodic array is mathematically constrained to belong to one of only 17 possible 2D plane symmetry groups. Any real image is both finite and imperfect. Crystallographic Image Processing (CIP) is an algorithm that Fourier transforms a real image into a 2D array of complex numbers, the Fourier coefficients of the image intensity, and then uses the relationships between those coefficients first to ascertain the 2D plane symmetry group that the imperfect, finite image is most likely to possess, and then to adjust the symmetry-related coefficients so as to perfect the symmetry. A Fourier synthesis of the symmetrized coefficients leads to a perfectly symmetric image in direct space (when accumulated rounding and calculation errors are ignored). The technique is thus an average over the direct-space experimental data selected from the thin film array. The image must be periodic in two dimensions for the technique to be applicable.
CIP has been developed over the past 40 years by the electron crystallography community, which works with 2D projections of 3D samples. Any periodic sample, whether 2D or 3D, has an "ideal structure", which is the structure absent any crystal defects. The ideal structure can be considered one average unit cell, propagated by translation into the whole sample. The "real structure" is an actual sample containing vacancies, dislocations, and other defects. Typically the goal of electron and other types of microscopy is examination of the real structure, as the ideal structure of a crystal is already known from X-ray crystallography. High-resolution transmission electron microscope image-based electron crystallography, on the other hand, reveals the ideal crystal structure by crystallographic averaging. The ideal structure of a 2D thin film cannot easily be examined in a spatially selective fashion by grazing-incidence X-ray or low-energy electron diffraction based crystallography. SPMs straightforwardly observe thin films in direct space, but SPM accuracy is hampered by blunt or multiple tips and other unavoidable instrument errors. Especially since the film is often a supramolecular system whose molecules are weakly bonded (via pi bonds, hydrogen bonds, etc.) both to the substrate and to each other, it is relatively easy for a molecule from the film to adhere to the scanning tip during the scan and become part of the tip during subsequent observation. If the thin film array has two-dimensional periodicity, CIP is a unique and effective tool both for image enhancement (determination of the ideal structure) and for the quantification of overall instrument error. In addition, if a sample of known 2D periodicity is scanned, CIP can return information about the contribution of the instrument itself to the image. In this thesis we show how the technique is applied to images of two-dimensionally periodic samples taken by SPMs.
To the best of our knowledge, this has never been done before. Since 2D periodic thin film arrays have an ideal structure that is mathematically constrained to belong to one of the 17 plane symmetry groups, we can use CIP to determine that group and use it for a particularly effective averaging algorithm. We demonstrate that the use of this averaging algorithm removes noise and random error from images more effectively than translational averaging, also known as "lattice averaging" or "Fourier filtering". We also demonstrate the ability to correct systematic errors caused by hysteresis in the scanning process. These results have the effect of obtaining the ideal structure of the sample, averaging out the defects crystallographically, by providing an average unit cell which, when translated, represents the ideal structure. In addition, if one has recorded a scanning probe image of a 2D periodic sample of known symmetry, we demonstrate that it is possible to use the Fourier coefficients of the image transform to solve the inverse problem and calculate the point spread function (PSF) of the instrument. Any real scanning probe instrument departs from the ideal PSF of a Dirac delta function, and CIP allows us to quantify this departure as far as point symmetries are concerned. The result is a deconvolution of the "effective tip", which includes any blunt or multiple tip effects, as well as the effects caused by adhesion of a sample molecule to the scanning tip, or scanning irregularities unrelated to the physical tip. We also demonstrate that the PSF, once known, can be used on a second image taken by the same instrument under approximately the same experimental conditions to remove errors introduced during that second imaging process. The preponderance of two-dimensionally periodic samples as subjects of SPM observation makes the application of CIP to SPM images a valuable technique to extract a maximum amount of information from these images. 
The improved resolution of current SPMs creates images with more higher-order Fourier coefficients than earlier, "softer" images; these higher-order coefficients are especially amenable to CIP, which can then effectively magnify the resolution improvement created by better hardware. The improved resolution, combined with the current interest in supramolecular structures (which, although 3D, usually start building on a 2D periodic surface), appears to provide an opportunity for CIP to contribute significantly to SPM image processing.
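The crystallographic averaging that CIP performs can be illustrated for the simplest nontrivial case. The sketch below is an illustration, not the CIP software: it enforces p2 (twofold) symmetry on a noisy synthetic image by averaging each Fourier coefficient F(k) with F(-k). For a real image F(-k) = conj(F(k)), so this forces every phase to 0 or 180 degrees, and the RMS error against the ideal structure drops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal 2D periodic "crystal" with p2 symmetry: a sum of cosines is even
# about the origin, so f(-x, -y) = f(x, y).
N = 64
y, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
ideal = (np.cos(2 * np.pi * 4 * x / N) + np.cos(2 * np.pi * 6 * y / N)
         + 0.5 * np.cos(2 * np.pi * (4 * x + 6 * y) / N))

noisy = ideal + 0.5 * rng.standard_normal((N, N))

# p2 symmetrization: average F(k) with F(-k).  For a real image
# F(-k) = conj(F(k)), so this keeps only the real part of each coefficient,
# i.e. forces every phase to 0 or 180 degrees as twofold symmetry demands.
F = np.fft.fft2(noisy)
F_sym = 0.5 * (F + np.conj(F))
symmetrized = np.fft.ifft2(F_sym).real

err_before = np.sqrt(np.mean((noisy - ideal) ** 2))
err_after = np.sqrt(np.mean((symmetrized - ideal) ** 2))
print(err_before, err_after)
```

Half of the noise power (the part violating the symmetry) is removed, which is the Fourier-space counterpart of averaging symmetry-related positions in direct space; the full algorithm additionally determines which of the 17 plane groups to impose.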
13

A time-dependent spectral point spread function for the OSIRIS optical spectrograph

2013 May 1900 (has links)
The primary goal of the recently formed Absorption Cross Sections of Ozone (ACSO) Commission is to establish an international standard for the ozone cross section used in the retrieval of atmospheric ozone number density profiles. The Canadian instrument OSIRIS onboard the Swedish spacecraft Odin has produced high quality ozone profiles since 2002, and as such the OSIRIS research team has been asked to contribute to the ACSO Commission by evaluating the impact of implementing different ozone cross sections into SASKTRAN, the radiative transfer model used in the retrieval of OSIRIS ozone profiles. Preliminary analysis revealed that the current state of the OSIRIS spectral point spread function, an array of values describing the dispersion of light within OSIRIS, would make such an evaluation difficult. Specifically, the current spectral point spread function is time-independent and therefore unable to account for any changes in the optics introduced by changes in the operational environment of the instrument. Such a situation introduces systematic errors when modelling the atmosphere as seen by OSIRIS, errors that impact the quality of the ozone number density profiles retrieved from OSIRIS measurements and make it difficult to accurately evaluate the impact of using different ozone cross sections within the SASKTRAN model. To eliminate these errors a method is developed to calculate, for the 310-350 nm wavelength range, a unique spectral point spread function for every scan in the OSIRIS mission history, the end result of which is a time-dependent spectral point spread function. The development of a modelling equation is then presented, which allows for any noise present in the time-dependent spectral point spread function to be reduced and relates the spectral point spread function to measured satellite parameters. 
Implementing this modelled time-dependent spectral point spread function into the OSIRIS ozone retrieval algorithms is shown to improve all OSIRIS ozone profiles by 1-2% for tangent altitudes of 35-48 km. Analysis is also presented that reveals a previously unaccounted-for temperature-dependent altitude shift in OSIRIS measurements. In conjunction with the use of the time-dependent spectral point spread function, accounting for this altitude shift is shown to result in an almost complete elimination of the temperature-induced systematic errors seen in OSIRIS ozone profiles. Such improvements lead to improved ozone number density profiles for all times of the OSIRIS mission and make it possible to evaluate the use of different ozone cross sections as requested by the ACSO Commission.
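Why a drifting spectral point spread function matters can be seen with a toy forward model. The sketch below is illustrative only: the Gaussian line-spread function and every number in it are assumptions, not OSIRIS values. The same absorption line appears shallower when the instrument PSF widens, which is exactly the kind of systematic error a time-independent PSF cannot capture.

```python
import numpy as np

# Wavelength grid (nm) and a synthetic high-resolution spectrum with a single
# absorption line, standing in for the 310-350 nm range used in the retrieval.
wl = np.linspace(310.0, 350.0, 4001)
highres = 1.0 - 0.8 * np.exp(-0.5 * ((wl - 330.0) / 0.05) ** 2)

def spectral_psf(width):
    # Gaussian line-spread function sampled on the same grid spacing (nm).
    dw = wl[1] - wl[0]
    g = np.arange(-200, 201) * dw
    k = np.exp(-0.5 * (g / width) ** 2)
    return k / k.sum()

def line_depth(width):
    # Depth of the observed line after blurring by the instrument PSF.
    obs = np.convolve(highres, spectral_psf(width), mode="same")
    core = obs[np.abs(wl - 330.0) < 5.0]   # stay clear of convolution edges
    return 1.0 - core.min()

# The same atmosphere "looks" different if the PSF drifts with the thermal
# environment of the instrument: the line is shallower for a wider PSF.
depth_cold = line_depth(0.30)
depth_warm = line_depth(0.45)
print(depth_cold, depth_warm)
```

A retrieval that assumes the "cold" PSF while the instrument is in the "warm" state would misattribute the depth change to the absorber itself, motivating a per-scan PSF.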
14

Computation of the Optical Point Spread Function of a Ball Lens

Lien, Chun-Yu 24 September 2012 (has links)
In this thesis, we analyze the simplest optical imaging system: a ball lens. The traditional method of analyzing an optical system with geometric optics gives only the roughest qualitative solution because it neglects the wave properties of light. Therefore, for accurate quantitative results, we need to analyze the system with a complete wave theory approach. We chose the ball lens as the focus of this research because its spherical symmetry allows us to investigate it rigorously with several different analytic methods. We apply geometric optics, Fourier optics, scalar wave optics, and electromagnetic optics to compute the point spread function (PSF) of a ball lens under the assumption that the point source is isotropic, and we then predict the spot sizes that correspond to each method. First, with geometric optics (GO), we apply the analytic ray tracing method to correlate the origins of light rays passing through the ball lens to their respective positions on the receiving end. We can then evaluate the energy distribution function by gathering the density of rays on the image plane. Second, in the theory of Fourier optics (FO), to obtain the analytic formula of the point spread function, the integral kernel can be approximated as the Fresnel integral kernel by means of the paraxial approximation. Compared to GO, the results from FO are superior due to the inclusion of wave characteristics. Furthermore, we consider scalar wave optics by directly solving the inhomogeneous Helmholtz equation that the scalar light field should satisfy. However, the light field is not assigned an exact physical meaning in the theory of scalar wave optics, so we reasonably require boundary conditions where the light field function and its first derivative are continuous everywhere on the surface of the ball lens.
Finally, in the theory of electromagnetic optics (EMO), we consider the polarization of the point source and the two kinds of Hertz vectors (electric and magnetic), both of which satisfy the inhomogeneous Helmholtz equation and are derived from Maxwell's equations in spherical structures. In contrast with scalar wave optics, the two Hertz vectors are defined concretely, thus allowing us to assign exact boundary conditions on the interface. The fields corresponding to the two Hertz vectors are then averaged to obtain the final point spread function.
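The geometric-optics part of this analysis is easy to reproduce for meridional rays. The sketch below is a hedged illustration, not the thesis code: it traces parallel rays through a ball lens with Snell's law and shows the spherical aberration that makes GO only qualitative, with marginal rays crossing the axis closer than the paraxial back focal distance R(2-n)/(2(n-1)).

```python
import numpy as np

def axis_crossing(h, R=1.0, n=1.5):
    """Distance behind the back vertex at which a ray, incident parallel to
    the axis at height h, crosses the axis after a ball lens (radius R,
    index n); meridional geometric optics with Snell's law."""
    theta1 = np.arcsin(h / R)               # incidence at the front surface
    theta2 = np.arcsin(np.sin(theta1) / n)  # refraction into the glass
    beta1 = np.pi - theta1                  # central angle of the entry point
    beta2 = beta1 - (np.pi - 2 * theta2)    # the chord subtends pi - 2*theta2
    y_exit = R * np.sin(beta2)
    deviation = 2 * (theta1 - theta2)       # total bend toward the axis
    x_cross = R * np.cos(beta2) + y_exit / np.tan(deviation)
    return x_cross - R                      # measured from the back vertex

paraxial = axis_crossing(1e-4)   # expect ~ R*(2-n)/(2*(n-1)) = 0.5 for n = 1.5
marginal = axis_crossing(0.6)    # marginal rays focus closer to the lens
print(paraxial, marginal)
```

Binning many such crossings over ray height gives the GO energy distribution mentioned above; the wave-optics treatments then add the diffraction structure this picture misses.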
15

Construction and Applications of Two-photon Micro-spectroscopy

Wang, Yi-Ming 03 July 2001 (has links)
In this thesis the effects of single-photon and multi-photon excitation on protoplasts from Arabidopsis thaliana are compared. Time-lapsed micro-spectroscopy at high spatial resolution is employed to study the response of chloroplasts within the protoplasts from Arabidopsis thaliana. We have found that the fluorescence spectra of chloroplasts exhibit dramatic changes and the protoplasts are rapidly damaged under multi-photon excitation as a result of pulsed laser illumination. In contrast, single-photon excitation of chloroplasts with a cw laser is relatively harmless to the vitality of the protoplasts. In addition, we have built an ultrafast-laser-excited cryogenic micro-spectroscopy setup to study the photoluminescence of PPV thin films. We found that the spectrum of PPV's photoluminescence shifts toward longer wavelengths and the non-radiative transition is suppressed as a result of the longer electron coherence length at low temperature.
16

Implementing Efficient iterative 3D Deconvolution for Microscopy / Implementering av effektiv iterativ 3D-avfaltning för mikroskopi

Mehadi, Ahmed Shah January 2009 (has links)
Both Gauss-Seidel iterative 3D deconvolution and Richardson-Lucy-like algorithms are used in high-noise microscopic medical image processing due to their stability and high-quality results. An approach to determine the difference between these two algorithms is presented in this paper. It is shown that the convergence rate and the quality of these two algorithms are influenced by the size of the point spread function (PSF): larger PSF sizes cause faster convergence, but this effect falls off for larger sizes. It is furthermore shown that the relaxation factor and the number of iterations influence the convergence rate of the two algorithms. It has been found that increasing the relaxation factor and the number of iterations improves convergence and can reduce the error of the deblurred image. It was also found that overrelaxation converges faster than underrelaxation for a small number of iterations; however, a smaller final error can be achieved with underrelaxation. The choice of underrelaxation and overrelaxation factor values is highly problem-specific and differs from one type of image to another. In addition, when it comes to 3D iterative deconvolution, the influence of boundary conditions on these two algorithms is discussed. Implementation aspects are discussed and it is concluded that cache memory is vital for achieving a fast implementation of iterative 3D deconvolution. A mix of the two algorithms has been developed and compared with the previously mentioned Gauss-Seidel and Richardson-Lucy-like algorithms. The experiments indicate that, if the value of the relaxation parameter is optimized, the Richardson-Lucy-like algorithm has the best performance for 3D iterative deconvolution. / The resolution of images taken with microscopes is today limited by diffraction. To get around this, the image is enhanced digitally, based on a mathematical model of the physical process. 
This thesis compares two algorithms for solving the equations: Richardson-Lucy and Gauss-Seidel. It further studies the effect of parameters such as the extent of the point spread function and the regularization of the equation solver.
17

Imaging, characterization and processing with axicon derivatives.

Saikaley, Andrew Grey 06 August 2013 (has links)
Axicons have been proposed for imaging applications since they offer the advantage of an extended depth of field (DOF). This enhanced DOF comes at the cost of degraded image quality, and image processing has been proposed to improve it. Initial efforts were focused on the use of an axicon in a borescope, thereby extending the depth of focus and eliminating the need for a focusing mechanism. Though the results were promising, it was clear that image processing would lead to improved image quality. This would also eliminate the need, in certain applications, for a fiber optic imaging bundle, as many modern video borescopes use an imaging sensor coupled directly to the front-end optics. In the present work, three types of refractive axicons are examined: a linear axicon, a logarithmic axicon and a Fresnel axicon. The linear axicon offers the advantage of simplicity and a significant amount of scientific literature, including the application of image restoration techniques. The Fresnel axicon has the advantage of compactness and potential low cost of production; as no physical examples of Fresnel axicons were available for experimentation until recently, very little literature exists. The logarithmic axicon has the advantage of a nearly constant longitudinal intensity distribution and an aspheric design producing pre-processed images superior to those of the aforementioned elements. Point Spread Functions (PSFs) for each of these axicons have been measured. These PSFs form the basis for the design of digital image restoration filters. The performance of these three optical elements and a number of restoration techniques are demonstrated and compared.
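One common way to turn a measured PSF into a restoration filter is Wiener deconvolution. The sketch below is a generic illustration, not the thesis's restoration pipeline; the synthetic core-plus-halo PSF is only a stand-in for the low-contrast blur of an extended-DOF element.

```python
import numpy as np

def wiener_restore(image, psf, nsr=1e-3):
    """Wiener deconvolution with a measured PSF (peak at the array centre);
    nsr is the assumed noise-to-signal power ratio (regularization)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

# Synthetic target blurred by a PSF with a bright core plus a broad halo,
# qualitatively like the contrast loss of an extended-DOF element.
N = 128
yy, xx = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2,
                     indexing="ij")
r2 = yy ** 2 + xx ** 2
psf = np.exp(-r2 / (2 * 1.5 ** 2)) + 0.2 * np.exp(-r2 / (2 * 12.0 ** 2))
psf /= psf.sum()

truth = np.zeros((N, N))
truth[40:88, 60:68] = 1.0                       # a bar target
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_restore(blurred, psf)
print(np.abs(blurred - truth).mean(), np.abs(restored - truth).mean())
```

The nsr term keeps the filter from amplifying noise at frequencies where the PSF transfers little signal, the central trade-off in restoring any extended-DOF image.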
18

Point spread function reconstruction for next generation adaptive optics systems

Keskin, Onur 01 February 2011 (has links)
In adaptive optics (AO) applications, the point spread function (PSF) is defined as the impulse response of the system; PSF reconstruction is used to calibrate image analysis techniques for astrometry and in the deconvolution of images to enhance their contrast. The partial correction provided by AO systems is due to the finite sampling of the wavefront sensor (WFS) and the deformable mirror (DM), and the finite bandwidth of the overall system. This partial correction is mainly due to the high spatial frequencies introduced by the atmospheric turbulence, which translate into a halo artifact on the PSF. Furthermore, the correction provided by the AO system in the direction of target objects degrades at greater angular distances from the guide star; this is called anisoplanatism. Consequently, the dimmer details of the AO images may not be detectable. One possible way to counteract this halo effect is through PSF reconstruction. In order to achieve accurate results, the analysis of AO-corrected images must account for the temporal variation of the PSF. The most promising and reliable technique for PSF reconstruction is to use the wavefront sensor data measured synchronously with the observation (AO exposure). With off-axis PSF reconstruction from a dual-DM AO system as the general objective, a model-based experimental evaluation of PSF reconstruction from classical AO systems has been performed. Building on the success with on-axis classical AO systems, the complexity of the model and the experimental set-up has been gradually increased to a multi-DM AO system and a methodology has been proposed. The good agreement between the numerical and experimental evaluations of the reconstructed PSF confirmed the successful implementation of the methodology.
Last, the complexity of the analysis and of the model is further extended from a single light source to a multi-light-source scheme, and off-axis PSF reconstruction is achieved from a dual-DM AO scheme in order to account for the anisoplanatic errors. One of the challenges in interpreting the PSF over wide fields arises from the temporal and field-dependent evolution of the adaptive optics PSF. The methodologies described in this thesis allow a quantitative analysis of wide-field observations that can account for these effects. The outcome of this research is important for the post-processing of images obtained by next-generation AO systems. Although the results are unique to the UVic experimental AO bench, the proposed PSF reconstruction methodologies will be applicable to other dual-DM systems and to multi-DM AO systems. More precisely, the importance of this thesis is to offer a PSF reconstruction technique for the adaptive optics instruments of the Thirty Meter Telescope (TMT). Once operational in 2016, TMT will be the first extremely large ground-based optical telescope. It will have a primary mirror diameter of 30 m.
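The core idea of reconstructing a PSF from synchronous wavefront-sensor data can be sketched as a long-exposure average of instantaneous PSFs computed from residual phase screens. This is a toy model: the white-noise phase screens and every parameter below are assumptions, not AO telemetry, and real reconstructions use spatially correlated residuals.

```python
import numpy as np

rng = np.random.default_rng(2)
N, pad = 32, 4                    # pupil grid size and zero-padding factor
yy, xx = np.meshgrid(np.arange(N) - N / 2 + 0.5,
                     np.arange(N) - N / 2 + 0.5, indexing="ij")
pupil = ((yy ** 2 + xx ** 2) <= (N / 2) ** 2).astype(float)

def psf_from_phase(phase):
    # Instantaneous PSF: squared modulus of the FFT of the pupil field.
    field = np.zeros((pad * N, pad * N), dtype=complex)
    field[:N, :N] = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

# Stand-in "telemetry": residual phase screens (radians) such as a
# reconstruction pipeline would derive from synchronous WFS measurements.
frames = [0.4 * rng.standard_normal((N, N)) for _ in range(200)]
long_exposure = sum(psf_from_phase(p) for p in frames) / len(frames)
diffraction = psf_from_phase(np.zeros((N, N)))

# Residual turbulence lowers the Strehl ratio and feeds a halo round the core.
strehl = long_exposure.max() / diffraction.max()
print(strehl)
```

The energy lost from the core reappears as exactly the halo discussed above; field-dependent (anisoplanatic) reconstruction repeats this computation with phase residuals projected toward each off-axis direction.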
19

Microscopy - Point Spread Function, Focus, Resolution

NÁHLÍK, Tomáš January 2015 (has links)
The aim of this thesis was to design new algorithms for processing image data from microscopes and to demonstrate the possibilities of their use on standard samples (latex particles of different diameters). The results were used for the analysis of real objects inside a living mammalian cell. To design these algorithms it was first necessary to understand how the image in the microscope is built up, including the variety of lens aberrations. It was necessary to start with simulations of the ideal case of imaging one point (PSF simulation): images of Airy discs in the plane of focus, or simulations using the ENZ theory. The available ENZ simulations provide only a few sections at different focal planes, and it was necessary to adjust them to a usable form for generating a full 3D view. Using these algorithms, the behavior of the basic lens aberrations was examined, as was the behavior of two particles (objects) at different distances from each other. From these observations, it was necessary to redefine the terms focus and resolution. Furthermore, definitions were introduced for the discriminability and distinguishability of objects in an image. Thanks to the new definitions and a new viewpoint (information entropy) on the discriminability/distinguishability problem of objects in an image, it was possible to design and develop image processing algorithms that can detect objects below the Abbe resolution condition using standard bright-field optical microscopy. It was found experimentally that the limiting factor for resolution with this method is the size and resolution of the camera chip. When using a chip with a higher density of points, we can achieve better results (detection of smaller objects) using the same algorithms.
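The classical two-point resolution picture that the thesis re-examines can be sketched numerically. In the illustration below (a Gaussian stand-in for the microscope PSF; all numbers are assumptions), two equal sources produce a detectable intensity dip only once their separation exceeds roughly twice the PSF width, which is the kind of threshold criterion the entropy-based definitions aim to go beyond.

```python
import numpy as np

def two_point_profile(separation, sigma=1.0, n_samples=2001):
    # 1D cut through the images of two equal point sources blurred by a
    # Gaussian PSF of width sigma (a stand-in for the microscope PSF).
    x = np.linspace(-6.0 * sigma, 6.0 * sigma, n_samples)
    profile = (np.exp(-0.5 * ((x - separation / 2) / sigma) ** 2)
               + np.exp(-0.5 * ((x + separation / 2) / sigma) ** 2))
    return x, profile

def dip_contrast(separation, sigma=1.0):
    # Classical two-point criterion: relative dip between the two maxima.
    _, profile = two_point_profile(separation, sigma)
    midpoint = profile[len(profile) // 2]
    return (profile.max() - midpoint) / profile.max()

# Below ~2*sigma the summed profile is unimodal (no dip at all), so any
# dip-based criterion declares the two points unresolved.
print(dip_contrast(1.5), dip_contrast(2.5))
```

The dip vanishes entirely below the merging separation, yet the merged profile still differs measurably from that of a single point, which is the information a finely sampled camera chip, and an entropy-based detector, can still exploit.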
20

Euclid weak lensing : PSF field estimation / Estimation du champ de PSF pour l’effet de lentille gravitationnelle faible avec Euclid

Schmitz, Morgan A. 22 October 2019 (has links)
As light propagates through the Universe, its path is altered by the presence of massive objects. This causes a distortion of the images of distant galaxies. Measuring this effect, called weak gravitational lensing, allows us to probe the large-scale structure of the Universe. This makes it a powerful source of cosmological insight, which can in particular be used to study the distribution of dark matter and the nature of Dark Energy. The European Space Agency's upcoming Euclid mission is a spaceborne telescope with weak lensing as one of its primary science objectives.
In practice, the weak lensing signal is recovered from the measurement of the shapes of galaxies. The images obtained by any optical instrument are altered by its Point Spread Function (PSF), caused by various effects: diffraction, imperfect optics, atmospheric turbulence (for ground-based telescopes), etc. Since the PSF also alters galaxy shapes, it is crucial to correct for it when performing weak lensing measurements. This, in turn, requires precise knowledge of the PSF itself.
The PSF varies depending on the position of objects within the instrument's focal plane. Unresolved stars in the field provide a measurement of the PSF at given positions, from which a PSF model can be built. In the case of Euclid, star images will suffer from undersampling, so the PSF model will need to perform a super-resolution step. In addition, because of the very wide band of its visible instrument, variations of the PSF with the wavelength of incoming light will also need to be accounted for.
The main contribution of this thesis is the development of novel PSF modelling approaches. These rely on sparsity and numerical optimal transport. The latter enables us to propose the first method capable of building a polychromatic PSF model, using no information other than undersampled star images, their positions and spectra. We also study the propagation of errors in the PSF to the measurement of galaxy shapes.
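The forward model behind a polychromatic PSF can be sketched as a spectrum-weighted sum of monochromatic diffraction PSFs. This is a toy model with assumed units and a hypothetical stellar spectrum; the thesis's actual contribution, recovering such a model from undersampled star images via sparsity and optimal transport, is the much harder inverse problem.

```python
import numpy as np

def monochromatic_psf(wavelength, N=64, aperture=16.0):
    """Diffraction PSF of a circular pupil; the pupil radius in grid units
    scales as aperture/wavelength, so longer wavelengths blur more."""
    yy, xx = np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2,
                         indexing="ij")
    radius = aperture / wavelength
    pupil = ((yy ** 2 + xx ** 2) <= radius ** 2).astype(float)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))) ** 2
    return psf / psf.sum()

# Polychromatic PSF: spectrum-weighted sum of monochromatic PSFs over the
# passband (wavelengths and the stellar spectrum here are hypothetical).
wavelengths = np.linspace(1.0, 2.0, 11)
spectrum = np.exp(-((wavelengths - 1.4) ** 2) / 0.1)
spectrum /= spectrum.sum()

poly_psf = sum(w * monochromatic_psf(lam)
               for w, lam in zip(spectrum, wavelengths))
print(poly_psf.sum())
```

Because each star's spectrum weights the sum differently, two stars at the same focal-plane position can have different effective PSFs, which is why a wide-band instrument needs a wavelength-resolved PSF model rather than a single broadband one.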
