31.
Computer vision at low light
Abhiram Gnanasambandam (12863432), 14 June 2022
<p>Imaging in low light is difficult because the number of photons arriving at the image sensor is low. This is a major technological challenge for applications such as surveillance and autonomous driving. Conventional CMOS image sensors (CIS) circumvent the issue with techniques such as burst photography. However, that process is slow, and it does not address the underlying problem: the CIS cannot efficiently capture the signal arriving at the sensor. This dissertation focuses on solving this problem using a combination of better image sensors (Quanta Image Sensors) and computational imaging techniques.</p>
<p><br></p>
<p>The first part of the thesis examines how quanta image sensors work and how they can be used to solve the low-light imaging problem. The second part concerns algorithms for images obtained in low light. The contributions in this part include: (1) understanding and proposing solutions for the Poisson noise model; (2) a new machine learning scheme, called student-teacher learning, that helps neural networks deal with noise; and (3) solutions that work not only in low light but across a wide range of signal and noise levels. Using these ideas, we can address a variety of low-light applications, such as color imaging, dynamic scene reconstruction, deblurring, and object detection.</p>
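The Poisson noise model named in this abstract can be illustrated in a few lines: in low light, pixel values are Poisson draws whose variance equals their mean, which is exactly why better sensors and noise-aware algorithms are needed. The snippet below is a toy sketch, not the thesis's QIS pipeline; the flat test scene, photon budget, and the Anscombe transform are standard textbook devices chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_low_light(image, photons_per_pixel):
    """Toy shot-noise-limited capture: scale the scene so its mean equals
    the photon budget, then draw independent Poisson counts per pixel."""
    flux = image / image.mean() * photons_per_pixel
    return rng.poisson(flux).astype(float)

def anscombe(counts):
    """Variance-stabilizing transform: maps Poisson counts to values with
    approximately unit variance, so Gaussian denoisers can be applied."""
    return 2.0 * np.sqrt(counts + 3.0 / 8.0)

scene = np.ones((64, 64))                       # flat scene for illustration
dark = simulate_low_light(scene, photons_per_pixel=2.0)
print(dark.mean(), dark.var())                  # mean ~ variance: the Poisson signature
stabilized = anscombe(dark)
```

At roughly 2 photons per pixel the sample mean and variance nearly coincide, the defining property of shot noise that the dissertation's algorithms must cope with.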
32.
Co-design of optical systems with phase masks for depth of field extension: performance evaluation and contribution of superresolution
Falcon Maimone, Rafael, 19 October 2017
Phase masks are wavefront encoding devices, typically situated at the aperture stop of an optical system, that engineer its point spread function (PSF); the technique is commonly known as wavefront coding. These masks can be used to extend the depth of field (DoF) of imaging systems without reducing light throughput, by producing a PSF that is more invariant to defocus. However, the larger the DoF, the more blurred the acquired raw image, so deconvolution has to be applied to the captured images.
Thus, the design of the phase masks has to take image processing into account in order to reach the optimal compromise between invariance of the PSF to defocus and the capacity to deconvolve the image. This joint design approach was introduced by Cathey and Dowski in 1995, refined in 2002 for continuous-phase DoF-enhancing masks, and generalized by Robinson and Stork in 2007 to correct other optical aberrations.

In this thesis we study the different aspects of phase mask optimization for DoF extension, such as the performance criteria and the relation of these criteria to the mask parameters. We use the so-called image quality (IQ), a mean-square-error-based criterion defined by Diaz et al., to co-design different phase masks and evaluate their performance. We then compare the relevance of the IQ criterion against other optical design metrics, such as the Strehl ratio and the modulation transfer function (MTF).

We focus in particular on binary annular phase masks and their performance under various conditions, such as the desired DoF range, the number of optimization parameters, and the presence of aberrations. We then apply the analysis tools developed for the binary phase masks to continuous-phase masks that appear commonly in the literature, such as the polynomial-phase masks. We extensively compare these masks to each other and to the binary masks, not only to assess their benefits, but also because analyzing their differences reveals their properties.

Phase masks function as a low-pass filter on diffraction-limited systems, effectively reducing aliasing. On the other hand, the signal processing technique known as superresolution uses several aliased frames of the same scene to enhance the resolution of the final image beyond the sampling resolution of the original optical system. Practical examples come from work carried out during a secondment with the industrial partner KLA-Tencor in Leuven, Belgium. At the end of the manuscript we study the relevance of using such a technique alongside phase masks for DoF extension.
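The compromise this abstract describes, PSF invariance to defocus versus ease of deconvolution, can be sketched with scalar diffraction: a cubic phase term (the Cathey-Dowski family) makes the PSF far less sensitive to defocus than the bare pupil, at the cost of a fixed blur. The grid size, mask strength, defocus range, and the L1 comparison below are illustrative choices, not values from the thesis.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)

def psf(defocus_waves, cubic_strength=0.0):
    """PSF of a circular pupil with defocus and an optional cubic phase
    mask, via Fraunhofer (far-field) propagation. Phases are in waves."""
    phase = defocus_waves * (X**2 + Y**2) + cubic_strength * (X**3 + Y**3)
    field = pupil * np.exp(2j * np.pi * phase)
    h = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return h / h.sum()

def dist(a, b):
    return np.abs(a - b).sum()   # L1 distance between normalized PSFs

# Bare pupil: the PSF changes strongly over 2 waves of defocus.
# With a strong cubic mask it stays nearly invariant (but blurred),
# which is what makes a single deconvolution filter usable across the DoF.
change_bare = dist(psf(0.0), psf(2.0))
change_coded = dist(psf(0.0, 15.0), psf(2.0, 15.0))
print(change_bare > change_coded)
```

The single number printed is a crude stand-in for the IQ-style criteria discussed above, which additionally account for the deconvolution step and noise.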
33.
Image Restoration Methods for Imaging through Atmospheric Turbulence
Zhiyuan Mao (15209827), 12 April 2023
<p> The performance of long-range imaging systems often suffers due to the presence of atmospheric turbulence. One way to alleviate the degradation caused by atmospheric turbulence is to apply post-processing mitigation algorithms, where a high-quality frame is reconstructed from a single degraded image or a sequence of degraded frames. The image processing algorithms for atmospheric turbulence mitigation have been studied for decades, yet some critical problems remain open.</p>
<p><br></p>
<p>This dissertation addresses the problem of image reconstruction through atmospheric turbulence from three unique perspectives: 1) reconstruction in the presence of moving objects, using an improved classical image processing pipeline; 2) a fast simulation scheme for efficiently generating large-scale turbulence-degraded datasets for training deep neural networks; and 3) a deep learning-based single-frame reconstruction method using a Vision Transformer.</p>
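To see why a fast simulator (perspective 2 above) matters, note that turbulence degradation is, to first order, spatially varying tilt plus blur, and training data must be generated in bulk. The sketch below is a deliberately crude toy degradation, per-block random pixel shifts followed by a separable box-filter blur, and is not the simulation scheme developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_turbulence(img, block=16, max_tilt=2):
    """Toy anisoplanatic degradation: each block gets its own random tilt
    (integer pixel shift), then a small blur approximates the short-exposure
    PSF. Three box filters per axis roughly mimic a Gaussian blur."""
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            dy, dx = rng.integers(-max_tilt, max_tilt + 1, size=2)
            shifted = np.roll(img, (dy, dx), axis=(0, 1))
            out[i:i + block, j:j + block] = shifted[i:i + block, j:j + block]
    k = np.ones(3) / 3.0
    for axis in (0, 1):
        for _ in range(3):
            out = np.apply_along_axis(
                lambda v: np.convolve(v, k, mode="same"), axis, out)
    return out

clean = np.zeros((64, 64))
clean[24:40, 24:40] = 1.0            # a bright square as the test scene
degraded = toy_turbulence(clean)     # warped, blurred observation
```

A real physics-grounded simulator replaces the random block shifts with correlated tilt fields and the box blur with atmospheric PSFs, but the input-output contract, clean frame in and degraded frame out, is the same.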
34.
Interferometric reflectance microscopy for physical and chemical characterization of biological nanoparticles
Yurdakul, Celalettin, 27 September 2021
Biological nanoparticles have enormous utility as well as potential adverse impacts in biotechnology, human health, and medicine. The physical and chemical properties of these nanoparticles have strong implications on their distribution, circulation, and clearance in vivo. Accurate morphological visualization and chemical characterization of nanoparticles by label-free (direct) optical microscopy would provide valuable insights into their natural and intrinsic properties. However, three major challenges related to label-free nanoparticle imaging must be overcome: (i) weak contrast due to exceptionally small size and low-refractive-index difference with the surrounding medium, (ii) inadequate spatial resolution to discern nanoscale features, and (iii) lack of chemical specificity. Advances in common-path interferometric microscopy have successfully overcome the weak contrast limitation and enabled direct detection of low-index biological nanoparticles down to single proteins. However, interferometric light microscopy does not overcome the diffraction limit, and studying the nanoparticle morphology at sub-wavelength spatial resolution remains a significant challenge. Moreover, chemical signature and composition are inaccessible in these interferometric optical measurements. This dissertation explores innovations in common-path interferometric microscopy to provide enhanced spatial resolution and chemical specificity in high-throughput imaging of individual nanoparticles.
The dissertation research effort focuses on a particular modality of interferometric imaging, termed “single-particle interferometric reflectance (SPIR) microscopy”, that uses an oxide-coated silicon substrate for enhanced coherent detection of the weakly scattered light. We seek to advance three specific aspects of SPIR microscopy: sensitivity, spatial resolution, and chemical specificity. The first one is to enhance particle visibility via novel optical and computational methods that push optical detection sensitivity. The second one is to improve the lateral resolution beyond the system's classical limit by a new computational imaging method with an engineered illumination function that accesses high-resolution spatial information at the nanoscale. The last one is to extract a distinctive chemical signature by probing the mid-infrared absorption-induced photothermal effect. To realize these goals, we introduce new theoretical models and experimental concepts.
This dissertation makes the following four major contributions in the wide-field common-path interferometric microscopy field: (1) formulating vectorial-optics based linear forward model that describes interferometric light scattering near planar interfaces in the quasi-static limit, (2) developing computationally efficient image reconstruction methods from defocus images to detect a single 25 nm dielectric nanoparticle, (3) developing asymmetric illumination based computational microscopy methods to achieve direct morphological visualization of nanoparticles at 150 nm, and (4) developing bond-selective interferometric microscopy to enable multispectral chemical imaging of sub-wavelength nanoparticles in the vibrational fingerprint region. Collectively, through these research projects, we demonstrate significant advancement in the wide-field common-path interferometric microscopy field to achieve high-resolution and accurate visualization and chemical characterization of a broad size range of individual biological nanoparticles with high sensitivity.
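The "enhanced coherent detection" mentioned above rests on a well-known scaling argument that is worth making explicit (it is background physics, not a result claimed by this abstract): the scattered field amplitude of a small particle grows with its polarizability, roughly the particle diameter cubed, so the interferometric cross-term with a reference field falls off as d^3 while pure scattered intensity falls off as d^6. The numbers below are arbitrary illustrative scales.

```python
import numpy as np

def detected_intensity(d_nm, E_ref=1.0, alpha=1e-9):
    """Toy common-path interferometric detection model.
    The camera sees |E_ref + E_scat|^2 = |E_ref|^2 + 2*E_ref*E_scat + E_scat^2.
    alpha is an arbitrary scale factor, not a calibrated value."""
    E_scat = alpha * d_nm**3          # scattered field amplitude ~ d^3
    pure_scattering = E_scat**2       # ~ d^6: vanishes fast for small particles
    interference = 2 * E_ref * E_scat # ~ d^3: the detectable cross-term
    return pure_scattering, interference

s25, i25 = detected_intensity(25.0)     # a 25 nm particle, as in the abstract
s100, i100 = detected_intensity(100.0)
print(i25 / s25)   # much greater than 1: the cross-term dominates
```

This is why a common-path interferometric scheme can reach single small nanoparticles where a darkfield-style pure-scattering measurement cannot.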
35.
Computational Wavefront Sensing: Theory, Practice, and Applications
Wang, Congli, 06 1900
Wavefront sensing is a fundamental problem in applied optics. Wavefront sensors that work in a deterministic manner are of particular interest. Starting from a unified theory of classical wavefront sensors, this dissertation discusses the relevant properties of wavefront sensor designs. Based on this analysis, a new wavefront sensor, termed the Coded Wavefront Sensor, is proposed to exploit the advantages identified, especially in lateral wavefront resolution. A prototype was built to demonstrate the new sensor.
Building on this, two applications are demonstrated: megapixel adaptive optics and simultaneous intensity and phase imaging. Combined with a spatial light modulator, a hardware deconvolution approach for computational cameras is demonstrated via a high-resolution adaptive optics system. By simply replacing the normal image sensor with the proposed one, together with a slight change of illumination, a bright-field microscope can be converted into a simultaneous intensity and phase microscope. These results show the broad application range of the proposed computational wavefront sensing approach.
Lastly, this dissertation proposes the idea of differentiable optics for wavefront engineering and lens metrology. Using automatic differentiation, a physically correct differentiable ray tracing engine is built, and its potential is illustrated via several challenging applications in optical design and metrology.
36.
Edge-resolved non-line-of-sight imaging
Seidel, Sheila W., 17 January 2023
Over the past decade, the possibility of forming images of objects hidden from line-of-sight (LOS) view has emerged as an intriguing and potentially important expansion of computational imaging and computer vision technology. This capability could help soldiers anticipate danger in a tunnel system, autonomous vehicles avoid collision, and first responders safely traverse a building. In many scenarios where non-line-of-sight (NLOS) vision is desired, the LOS view is obstructed by a wall with a vertical edge. In this thesis we show that through modeling and computation, the impediment to LOS itself can be exploited for enhanced resolution of the hidden scene.
NLOS methods may be active, where controlled illumination of the hidden scene is used, or passive, relying only on already present light sources. In both active and passive NLOS imaging, measured light returns to the sensor after multiple diffuse bounces. Each bounce scatters light in all directions, eliminating directional information. When the scene is hidden behind a wall with a vertical edge, that edge occludes light as a function of its incident azimuthal angle around the edge. Measurements acquired on the floor adjacent to the occluding edge thus contain rich azimuthal information about the hidden scene. In this thesis, we explore several edge-resolved NLOS imaging systems that exploit the occlusion provided by a vertical edge. In addition to demonstrating novel edge-resolved NLOS imaging systems with real experimental data, this thesis includes modeling, performance bound analyses, and inversion algorithms for the proposed systems.
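The azimuthal occlusion argument above can be made concrete with a toy 1D forward model: each floor pixel integrates only those hidden-scene angles the edge leaves visible, so the measurement is a cumulative view of the scene and differentiation along the floor recovers the hidden radiance. The bin counts and geometry below are illustrative, and the model omits the radial falloff and floor albedo effects treated later in the thesis.

```python
import numpy as np

def edge_camera_measurement(scene_1d, n_floor=64):
    """Toy forward model for a vertical-edge ('corner') camera.
    scene_1d[k] is hidden-scene radiance in the k-th azimuthal bin.
    A floor pixel at azimuth theta_p sees only hidden angles theta <= theta_p:
    the edge acts as a one-sided occluder."""
    n_scene = len(scene_1d)
    theta_scene = np.linspace(0, np.pi / 2, n_scene)
    theta_floor = np.linspace(0, np.pi / 2, n_floor)
    # Visibility matrix: A[p, k] = 1 if scene bin k is visible at floor pixel p
    A = (theta_scene[None, :] <= theta_floor[:, None]).astype(float)
    return A @ scene_1d

scene = np.zeros(32)
scene[10] = 5.0                      # one bright hidden source
m = edge_camera_measurement(scene)   # cumulative intensity along the floor
step = np.diff(m)                    # inversion = differentiation
print(int(np.argmax(step)))          # → 20: floor pixel where the source "turns on"
```

Real penumbra measurements are noisy, so the differentiation step must be regularized, which is one reason the thesis poses reconstruction as an optimization rather than a plain derivative.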
We first explore the use of a single vertical edge to form a 1D (in azimuthal angle) reconstruction of the hidden scene. Prior work demonstrated that temporal variation in a video of the floor may be used to image moving components of the hidden scene. In contrast, our algorithm reconstructs both moving and stationary hidden scenery from a single photograph, without assuming uniform floor albedo. We derive a forward model that describes the measured photograph as a nonlinear combination of the unknown floor albedo and the light from behind the wall. The inverse problem, which is the joint estimation of floor albedo and a 1D reconstruction of the hidden scene, is solved via optimization, where we introduce regularizers that help separate light variations in the measured photograph due to floor pattern and hidden scene, respectively.
Next, we combine the resolving power of a vertical edge with information from the relationship between intensity and radial distance to form 2D reconstructions from a single passive photograph. We derive a new forward model, accounting for radial falloff, and propose two inversion algorithms to form 2D reconstructions from a single photograph of the penumbra. The performances of both algorithms are demonstrated on experimental data corresponding to several different hidden scene configurations. A Cramer-Rao bound analysis further demonstrates the feasibility and limitations of this 2D corner camera.
Our doorway camera exploits the occlusion provided by the two vertical edges of a doorway for more robust 2D reconstruction of the hidden scene. This work provides and demonstrates a novel inversion algorithm to jointly estimate two views of change in the hidden scene, using the temporal difference between photographs acquired on the visible side of the doorway. A Cramer-Rao bound analysis is used to demonstrate the 2D resolving power of the doorway camera over other passive acquisition strategies and to motivate the novel biangular reconstruction grid.
Lastly, we present the active corner camera. Most existing active NLOS methods illuminate the hidden scene using a pulsed laser directed at a relay surface and collect time-resolved measurements of returning light. The prevailing approaches are inherently limited by the need for laser scanning, a process that is generally too slow to image hidden objects in motion. Methods that avoid laser scanning track the moving parts of the hidden scene as one or two point targets. In this work, based on more complete optical response modeling, yet still without multiple illumination positions, we demonstrate accurate reconstructions of objects in motion and a 'map' of the stationary scenery behind them. This new ability to count, localize, and characterize the sizes of hidden objects in motion, combined with mapping of the stationary hidden scene, could greatly improve indoor situational awareness in a variety of applications.
37.
Augmenting label-free imaging modalities with deep learning based digital staining
Cheng, Shiyi, 30 August 2023
Label-free imaging modalities offer numerous advantages, such as the ability to avoid the time-consuming and potentially disruptive process of physical staining. However, one challenge that arises in label-free imaging is the limited ability to extract specific structural or molecular information from the acquired images. To overcome this limitation, a novel approach known as digital staining or digital labeling has emerged. Digital staining leverages the power of deep learning algorithms to virtually introduce labels or stains into label-free images, thereby enabling the extraction of detailed information that would typically require physical staining. The integration of digital staining with label-free imaging holds great promise in expanding the capabilities of imaging techniques, facilitating improved analysis, and advancing our understanding of biological systems at both the cellular and tissue level. In this thesis, I explore supervised and semi-supervised methodologies of digital staining and the applications in augmenting label-free imaging modalities, particularly in the context of cell imaging and brain imaging.
In the first part of the thesis, I demonstrate the novel integration of multi-contrast dark-field reflectance microscopy and supervised deep learning to enable subcellular immunofluorescence labeling and cell cytometry from label-free imaging. By leveraging the rich structural information and sensitivity of reflectance microscopy, this method accurately predicts subcellular features without the need for physical staining. As a result of the use of a novel multi-contrast modality, the digital labeling approach demonstrates significant improvements over the state-of-the-art techniques, achieving up to 3× prediction accuracy. In addition to fluorescence prediction, the method successfully reproduces single-cell level structural phenotypes related to cell cycles. The multiplexed readouts obtained through digital labeling enable accurate multi-parametric single-cell profiling across a large cell population.
In the second part, I investigate a novel digital staining optical coherence tomography (DS-OCT) modality that combines the advantages of serial sectioning OCT (S-OCT) and semi-supervised deep learning, and demonstrate its advantages for 3D histological brain imaging. The DS model is trained with a semi-supervised learning framework that incorporates unpaired translation, a biophysical model, and cross-modality image registration, and is broadly applicable to other weakly paired bioimaging modalities. The DS model translates S-OCT images into Gallyas silver staining, providing consistent staining quality across samples. I further show that DS enhances contrast across cortical layer boundaries and enables reliable cortical layer differentiation. Additionally, DS-OCT preserves 3D geometry on centimeter-scale brain tissue blocks. A pilot study shows promising results on other anatomical regions acquired from different S-OCT systems, highlighting the method's potential to generalize across imaging contexts.
Overall, I investigate the problems of augmenting label-free imaging modalities with deep learning generated digital stains. I explored both supervised and semi-supervised methods for building novel DS frameworks. My work showcased two important applications in the field of immunofluorescence cell imaging and 3D histological brain imaging. On the one hand, the integration of DS techniques with multi-contrast microscopy has the potential to enhance the throughput of single-cell imaging cytometry, and phenotyping. On the other hand, integrating DS techniques with S-OCT holds great potential for high-throughput human brain imaging, enabling comprehensive studies on the structure and function of the brain. Through the exploration, I aim to shed light on the impact of digital staining in the field of computational imaging and its implications for various scientific disciplines.
38.
Coded Measurement for Imaging and Spectroscopy
Portnoy, Andrew David, January 2009
<p>This thesis describes three computational optical systems and their underlying coding strategies. These codes are useful in a variety of optical imaging and spectroscopic applications. Two multichannel cameras are described. They both use a lenslet array to generate multiple copies of a scene on the detector. Digital processing combines the measured data into a single image. The visible system uses focal plane coding, and the long wave infrared (LWIR) system uses shift coding. With proper calibration, the multichannel interpolation results recover contrast for targets at frequencies beyond the aliasing limit of the individual subimages. This thesis also describes a LWIR imaging system that simultaneously measures four wavelength channels each with narrow bandwidth. In this system, lenses, aperture masks, and dispersive optics implement a spatially varying spectral code.</p>
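The claim about recovering contrast beyond the aliasing limit of the subimages has a simple 1D analogue: several aliased, mutually shifted low-resolution samplings of the same signal jointly determine the high-resolution signal. The sketch below uses ideal integer shifts on the fine grid and noise-free interleaving, a far simpler setting than the calibrated lenslet cameras described above, but it shows the principle.

```python
import numpy as np

def subsampled_copies(signal, factor, shifts):
    """Generate aliased low-res copies of a 1-D signal, each sampled on a
    grid offset by a known shift: a toy stand-in for lenslet subimages."""
    return [signal[s::factor] for s in shifts]

def interleave(copies, factor, shifts, n):
    """Recover the high-res signal by interleaving the copies onto the
    fine grid (ideal registration, noise-free: exact recovery)."""
    out = np.zeros(n)
    for c, s in zip(copies, shifts):
        out[s::factor] = c
    return out

# 12 cycles over 64 samples: above the Nyquist limit of any single
# 4x-decimated copy, so each subimage alone is aliased.
hi = np.sin(2 * np.pi * 12 * np.arange(64) / 64)
copies = subsampled_copies(hi, 4, [0, 1, 2, 3])
rec = interleave(copies, 4, [0, 1, 2, 3], 64)
print(np.allclose(rec, hi))  # → True
```

With real optics the shifts are non-integer and must be calibrated, and interleaving becomes an interpolation problem, which is where the careful calibration mentioned in the abstract comes in.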
39.
Digital Phase Correction of a Partially Coherent Sparse Aperture System
Krug, Sarah Elaine, 27 August 2015
No description available.
40.
Kernel Estimation Approaches to Blind Deconvolution
Yash Sanghvi (18387693), 19 April 2024
<p dir="ltr">The past two decades have seen photography shift from the hands of professionals to those of the average smartphone user. However, fitting a camera module in the palm of your hand has come at a cost. The reduced sensor size, and hence the smaller pixels, make the image inherently noisier because fewer photons are captured. To compensate, we can increase the exposure time, but this exaggerates the effect of hand shake, making the image blurrier. The presence of both noise and blur makes post-processing algorithms necessary to produce a clean and sharp image.</p><p dir="ltr">In this thesis, we discuss various methods of deblurring images in the presence of noise. Specifically, we address the problem of photon-limited deconvolution, both with and without the underlying blur kernel being known, i.e., non-blind and blind deconvolution respectively. For blind deconvolution, we discuss the flaws of the conventional approach of jointly estimating the image and the blur kernel. That approach, despite its drawbacks, has been the go-to method for decades. We then discuss the relatively unexplored kernel-first approach, which is more numerically stable than its alternating-minimization counterpart, and show how to implement this framework with deep neural networks for both photon-limited and noiseless deconvolution problems.</p>
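The non-blind step at the core of a kernel-first pipeline can be sketched with a classical Wiener filter: once a kernel estimate is in hand, the image follows from a single frequency-domain division. This is a generic textbook solver, not the photon-limited networks developed in the thesis; the box kernel and SNR value below are arbitrary illustrative choices.

```python
import numpy as np

def wiener_deconv(blurred, kernel, snr=100.0):
    """Non-blind Wiener deconvolution in the frequency domain.
    In a kernel-first blind pipeline, a kernel estimator runs first and a
    solver like this recovers the image; here the kernel is simply given."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)   # regularized inverse filter
    return np.real(np.fft.ifft2(G * np.fft.fft2(blurred)))

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                              # a bright square test scene
kernel = np.ones((5, 5)) / 25.0                    # box blur as a stand-in kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deconv(blurred, kernel, snr=1e4)
```

In the photon-limited regime the noise is Poisson rather than Gaussian, so this quadratic filter is only a baseline; the thesis's contribution is precisely in replacing it, and the kernel estimator, with noise-aware learned components.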