About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Interferometric reflectance microscopy for physical and chemical characterization of biological nanoparticles

Yurdakul, Celalettin 27 September 2021 (has links)
Biological nanoparticles have enormous utility as well as potential adverse impacts in biotechnology, human health, and medicine. The physical and chemical properties of these nanoparticles have strong implications for their distribution, circulation, and clearance in vivo. Accurate morphological visualization and chemical characterization of nanoparticles by label-free (direct) optical microscopy would provide valuable insights into their natural and intrinsic properties. However, three major challenges related to label-free nanoparticle imaging must be overcome: (i) weak contrast due to exceptionally small size and low refractive-index difference with the surrounding medium, (ii) inadequate spatial resolution to discern nanoscale features, and (iii) lack of chemical specificity. Advances in common-path interferometric microscopy have successfully overcome the weak-contrast limitation and enabled direct detection of low-index biological nanoparticles down to single proteins. However, interferometric light microscopy does not overcome the diffraction limit, and studying nanoparticle morphology at sub-wavelength spatial resolution remains a significant challenge. Moreover, chemical signature and composition are inaccessible to these interferometric optical measurements. This dissertation explores innovations in common-path interferometric microscopy that provide enhanced spatial resolution and chemical specificity in high-throughput imaging of individual nanoparticles. The research focuses on a particular modality of interferometric imaging, termed "single-particle interferometric reflectance (SPIR) microscopy", which uses an oxide-coated silicon substrate for enhanced coherent detection of the weakly scattered light. We seek to advance three specific aspects of SPIR microscopy: sensitivity, spatial resolution, and chemical specificity.
The first is to enhance particle visibility via novel optical and computational methods that push optical detection sensitivity. The second is to improve the lateral resolution beyond the system's classical limit by a new computational imaging method with an engineered illumination function that accesses high-resolution spatial information at the nanoscale. The last is to extract a distinctive chemical signature by probing the mid-infrared absorption-induced photothermal effect. To realize these goals, we introduce new theoretical models and experimental concepts. This dissertation makes the following four major contributions to the wide-field common-path interferometric microscopy field: (1) formulating a vectorial-optics-based linear forward model that describes interferometric light scattering near planar interfaces in the quasi-static limit, (2) developing computationally efficient image reconstruction methods from defocus images to detect a single 25 nm dielectric nanoparticle, (3) developing asymmetric-illumination-based computational microscopy methods to achieve direct morphological visualization of nanoparticles at 150 nm, and (4) developing bond-selective interferometric microscopy to enable multispectral chemical imaging of sub-wavelength nanoparticles in the vibrational fingerprint region. Collectively, through these research projects, we demonstrate significant advances in wide-field common-path interferometric microscopy toward high-resolution, accurate visualization and chemical characterization of a broad size range of individual biological nanoparticles with high sensitivity.
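The sensitivity advantage of common-path interferometric detection described in this abstract can be sketched numerically: the pure scattering intensity of a weak scatterer is undetectable, but its cross term with the substrate reflection is orders of magnitude larger. The field magnitudes and phase below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

# Toy magnitudes (our assumptions): a strong reference field from the
# oxide-coated silicon substrate interferes with a weakly scattered field.
E_r = 1.0        # reference field reflected by the substrate
E_s = 1e-3       # weakly scattered field of a small, low-index nanoparticle
phi = 0.0        # relative phase, tunable via the oxide spacer thickness

# Detected intensity ~ |E_r|^2 + 2 E_r E_s cos(phi) + |E_s|^2.
pure_scattering = E_s ** 2                 # ~1e-6: lost in noise on its own
cross_term = 2 * E_r * E_s * np.cos(phi)   # ~2e-3: three orders larger
contrast = cross_term / E_r ** 2           # fringe contrast on the detector
```

The cross term scales linearly rather than quadratically with the scattered field, which is why coherent detection rescues particles whose direct scattering signal is far below the noise floor.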

Computational Wavefront Sensing: Theory, Practice, and Applications

Wang, Congli 06 1900 (has links)
Wavefront sensing is a fundamental problem in applied optics. Wavefront sensors that work in a deterministic manner are of particular interest. Starting from a unified theory for classical wavefront sensors, this dissertation discusses relevant properties of wavefront sensor designs. Building on this analysis, a new wavefront sensor, termed the Coded Wavefront Sensor, is proposed to leverage its advantages, especially lateral wavefront resolution. A prototype was built to demonstrate this new wavefront sensor. With it, two specific applications are demonstrated: megapixel adaptive optics and simultaneous intensity and phase imaging. Combined with a spatial light modulator, a hardware deconvolution approach for computational cameras is demonstrated via a high-resolution adaptive optics system. By simply replacing the normal image sensor with the proposed one, along with a slight change of illumination, a bright-field microscope can be converted into a simultaneous intensity and phase microscope. These results show the broad application range of the proposed computational wavefront sensing approach. Lastly, this dissertation proposes the idea of differentiable optics for wavefront engineering and lens metrology. Using automatic differentiation, a physically correct differentiable ray tracing engine is built, with its potential illustrated via several challenging applications in optical design and metrology.
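The deterministic wavefront sensing described above can be illustrated with a 1D toy: a slope-measuring sensor reports the local wavefront gradient, and the wavefront is recovered, up to an unobservable piston term, by integrating the slope field. The discretization and the quadratic test wavefront are our choices for illustration, not the Coded Wavefront Sensor's actual measurement model.

```python
import numpy as np

n = 32
x = np.linspace(-1.0, 1.0, n)
w = x ** 2                          # true wavefront (defocus-like aberration)
slopes = np.gradient(w, x)          # what a slope-based sensor reports

# Trapezoidal integration of the slope field, then piston removal.
steps = (slopes[1:] + slopes[:-1]) / 2 * np.diff(x)
w_hat = np.concatenate([[0.0], np.cumsum(steps)])
w_hat += w.mean() - w_hat.mean()    # fix the piston (constant-offset) ambiguity
```

In 2D, the same idea becomes a least-squares integration of a slope field, which is one reason deterministic sensors can be fast: reconstruction is a single linear solve rather than an iterative search.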

Edge-resolved non-line-of-sight imaging

Seidel, Sheila W. 17 January 2023 (has links)
Over the past decade, the possibility of forming images of objects hidden from line-of-sight (LOS) view has emerged as an intriguing and potentially important expansion of computational imaging and computer vision technology. This capability could help soldiers anticipate danger in a tunnel system, autonomous vehicles avoid collision, and first responders safely traverse a building. In many scenarios where non-line-of-sight (NLOS) vision is desired, the LOS view is obstructed by a wall with a vertical edge. In this thesis we show that through modeling and computation, the impediment to LOS itself can be exploited for enhanced resolution of the hidden scene. NLOS methods may be active, where controlled illumination of the hidden scene is used, or passive, relying only on already present light sources. In both active and passive NLOS imaging, measured light returns to the sensor after multiple diffuse bounces. Each bounce scatters light in all directions, eliminating directional information. When the scene is hidden behind a wall with a vertical edge, that edge occludes light as a function of its incident azimuthal angle around the edge. Measurements acquired on the floor adjacent to the occluding edge thus contain rich azimuthal information about the hidden scene. In this thesis, we explore several edge-resolved NLOS imaging systems that exploit the occlusion provided by a vertical edge. In addition to demonstrating novel edge-resolved NLOS imaging systems with real experimental data, this thesis includes modeling, performance bound analyses, and inversion algorithms for the proposed systems. We first explore the use of a single vertical edge to form a 1D (in azimuthal angle) reconstruction of the hidden scene. Prior work demonstrated that temporal variation in a video of the floor may be used to image moving components of the hidden scene. 
In contrast, our algorithm reconstructs both moving and stationary hidden scenery from a single photograph, without assuming uniform floor albedo. We derive a forward model that describes the measured photograph as a nonlinear combination of the unknown floor albedo and the light from behind the wall. The inverse problem, which is the joint estimation of floor albedo and a 1D reconstruction of the hidden scene, is solved via optimization, where we introduce regularizers that help separate light variations in the measured photograph due to floor pattern and hidden scene, respectively. Next, we combine the resolving power of a vertical edge with information from the relationship between intensity and radial distance to form 2D reconstructions from a single passive photograph. We derive a new forward model, accounting for radial falloff, and propose two inversion algorithms to form 2D reconstructions from a single photograph of the penumbra. The performance of both algorithms is demonstrated on experimental data corresponding to several different hidden scene configurations. A Cramér-Rao bound analysis further demonstrates the feasibility and limitations of this 2D corner camera. Our doorway camera exploits the occlusion provided by the two vertical edges of a doorway for more robust 2D reconstruction of the hidden scene. This work provides and demonstrates a novel inversion algorithm to jointly estimate two views of change in the hidden scene, using the temporal difference between photographs acquired on the visible side of the doorway. A Cramér-Rao bound analysis is used to demonstrate the 2D resolving power of the doorway camera over other passive acquisition strategies and to motivate the novel biangular reconstruction grid. Lastly, we present the active corner camera. Most existing active NLOS methods illuminate the hidden scene using a pulsed laser directed at a relay surface and collect time-resolved measurements of returning light.
The prevailing approaches are inherently limited by the need for laser scanning, a process that is generally too slow to image hidden objects in motion. Methods that avoid laser scanning track the moving parts of the hidden scene as one or two point targets. In this work, based on more complete optical response modeling yet still without multiple illumination positions, we demonstrate accurate reconstructions of objects in motion and a 'map' of the stationary scenery behind them. This new ability to count, localize, and characterize the sizes of hidden objects in motion, combined with mapping of the stationary hidden scene, could greatly improve indoor situational awareness in a variety of applications.
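The azimuthal resolving power of a vertical edge described in this abstract can be sketched with a 1D toy model: a floor point at azimuth bin i around the edge sees only hidden-scene directions j ≤ i, so the penumbra is a cumulative mixing of the hidden radiance scaled by floor albedo. The discretization and the uniform albedo below are our simplifying assumptions; the thesis jointly estimates a non-uniform albedo.

```python
import numpy as np

n = 8
scene = np.zeros(n)
scene[2] = 1.0                       # one hidden source at azimuthal bin 2
A = np.tril(np.ones((n, n)))         # edge visibility: pixel i sees bins j <= i
albedo = np.full(n, 0.7)             # floor albedo (uniform in this toy)
photo = albedo * (A @ scene)         # photograph of the floor penumbra

# With albedo known, the 1D azimuthal scene follows from differencing
# adjacent floor samples (inverting the cumulative visibility matrix).
recovered = np.diff(np.concatenate([[0.0], photo / albedo]))
```

The triangular visibility matrix is well conditioned for differencing, which is why a single ordinary photograph of the floor carries usable 1D information about the hidden scene.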

Augmenting label-free imaging modalities with deep learning based digital staining

Cheng, Shiyi 30 August 2023 (has links)
Label-free imaging modalities offer numerous advantages, such as the ability to avoid the time-consuming and potentially disruptive process of physical staining. However, one challenge that arises in label-free imaging is the limited ability to extract specific structural or molecular information from the acquired images. To overcome this limitation, a novel approach known as digital staining or digital labeling has emerged. Digital staining leverages the power of deep learning algorithms to virtually introduce labels or stains into label-free images, thereby enabling the extraction of detailed information that would typically require physical staining. The integration of digital staining with label-free imaging holds great promise in expanding the capabilities of imaging techniques, facilitating improved analysis, and advancing our understanding of biological systems at both the cellular and tissue levels. In this thesis, I explore supervised and semi-supervised methodologies for digital staining and their applications in augmenting label-free imaging modalities, particularly in the context of cell imaging and brain imaging. In the first part of the thesis, I demonstrate the novel integration of multi-contrast dark-field reflectance microscopy and supervised deep learning to enable subcellular immunofluorescence labeling and cell cytometry from label-free imaging. By leveraging the rich structural information and sensitivity of reflectance microscopy, this method accurately predicts subcellular features without the need for physical staining. As a result of the use of a novel multi-contrast modality, the digital labeling approach demonstrates significant improvements over state-of-the-art techniques, achieving up to 3× higher prediction accuracy. In addition to fluorescence prediction, the method successfully reproduces single-cell level structural phenotypes related to cell cycles.
The multiplexed readouts obtained through digital labeling enable accurate multi-parametric single-cell profiling across a large cell population. In the second part, I investigate a novel digital staining optical coherence tomography (DS-OCT) modality combining the advantages of serial sectioning OCT (S-OCT) and semi-supervised deep learning, and demonstrate several advantages for 3D histological brain imaging. The DS model is trained using a semi-supervised learning framework that incorporates unpaired translation, a biophysical model, and cross-modality image registration, and is broadly applicable to other weakly paired bioimaging modalities. The DS model enables the translation of S-OCT images to Gallyas silver staining, providing consistent staining quality across different samples. I further show that DS enhances contrast across cortical layer boundaries and enables reliable cortical layer differentiation. Additionally, DS-OCT preserves 3D geometry on centimeter-scale brain tissue blocks. My pilot study demonstrates promising results on other anatomical regions acquired from different S-OCT systems, highlighting its potential for generalization in various imaging contexts. Overall, I investigate the problem of augmenting label-free imaging modalities with deep-learning-generated digital stains. I explore both supervised and semi-supervised methods for building novel DS frameworks. My work showcases two important applications: immunofluorescence cell imaging and 3D histological brain imaging. On the one hand, the integration of DS techniques with multi-contrast microscopy has the potential to enhance the throughput of single-cell imaging cytometry and phenotyping. On the other hand, integrating DS techniques with S-OCT holds great potential for high-throughput human brain imaging, enabling comprehensive studies of the structure and function of the brain. Through this exploration, I aim to shed light on the impact of digital staining in the field of computational imaging and its implications for various scientific disciplines.

Coded Measurement for Imaging and Spectroscopy

Portnoy, Andrew David January 2009 (has links)
This thesis describes three computational optical systems and their underlying coding strategies. These codes are useful in a variety of optical imaging and spectroscopic applications. Two multichannel cameras are described. They both use a lenslet array to generate multiple copies of a scene on the detector. Digital processing combines the measured data into a single image. The visible system uses focal plane coding, and the long wave infrared (LWIR) system uses shift coding. With proper calibration, the multichannel interpolation results recover contrast for targets at frequencies beyond the aliasing limit of the individual subimages. This thesis also describes an LWIR imaging system that simultaneously measures four wavelength channels, each with narrow bandwidth. In this system, lenses, aperture masks, and dispersive optics implement a spatially varying spectral code.

Digital Phase Correction of a Partially Coherent Sparse Aperture System

Krug, Sarah Elaine 27 August 2015 (has links)
No description available.

Kernel Estimation Approaches to Blind Deconvolution

Yash Sanghvi (18387693) 19 April 2024 (has links)
The past two decades have seen photography shift from the hands of professionals to those of the average smartphone user. However, fitting a camera module in the palm of your hand has come at its own cost. The reduced sensor size, and hence the smaller pixels, has made images inherently noisier due to fewer photons being captured. To compensate for fewer photons, we can increase the exposure of the camera, but this may exaggerate the effect of hand shake, making the image blurrier. The presence of both noise and blur makes post-processing algorithms necessary to produce a clean and sharp image.

In this thesis, we discuss various methods of deblurring images in the presence of noise. Specifically, we address the problem of photon-limited deconvolution, both with and without the underlying blur kernel being known, i.e., non-blind and blind deconvolution respectively. For the problem of blind deconvolution, we discuss the flaws of the conventional approach of joint estimation of the image and blur kernel. This approach, despite its drawbacks, has been the go-to method for solving blind deconvolution for decades. We then discuss the relatively unexplored kernel-first approach to solving the problem, which is more numerically stable than its alternating-minimization counterpart. We show how to implement this framework in practice using deep neural networks, for both photon-limited and noiseless deconvolution problems.
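In a kernel-first pipeline like the one this abstract describes, once a kernel estimate is in hand the remaining non-blind step is comparatively well posed. As a minimal sketch of that step, the classical frequency-domain Wiener filter below deblurs an image given a known kernel; the filter form, regularization weight, and toy kernel are our illustrative choices, not the deep-network estimators of the thesis.

```python
import numpy as np

def wiener_deconvolve(y, k, noise_power=1e-3):
    """Deblur y given blur kernel k (same shape, circular convolution)."""
    K = np.fft.fft2(k)
    H = np.conj(K) / (np.abs(K) ** 2 + noise_power)   # regularized inverse
    return np.real(np.fft.ifft2(H * np.fft.fft2(y)))

rng = np.random.default_rng(0)
x = rng.random((32, 32))                    # ground-truth image
k = np.zeros((32, 32))
k[0, :3] = 1 / 3                            # horizontal 3-pixel motion blur
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))  # blurred image
x_hat = wiener_deconvolve(y, k)             # closer to x than y is
```

Blind deconvolution is hard precisely because k is unknown; the kernel-first view separates estimating k from this comparatively stable inversion, instead of alternating between the two.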

High Speed Imaging via Advanced Modeling

Soumendu Majee (10942896) 04 August 2021 (has links)
There is an increasing need to accurately image objects at high temporal resolution for different applications, in order to analyze the underlying physical, chemical, or biological processes. In this thesis, we use advanced models exploiting the image structure and the measurement process in order to achieve an improved temporal resolution. The thesis is divided into three chapters, each corresponding to a different imaging application.

In the first chapter, we propose a novel method to localize neurons in fluorescence microscopy images. Accurate localization of neurons enables us to scan only the neuron locations instead of the full brain volume and thus improve the temporal resolution of neuron activity monitoring. We formulate the neuron localization problem as an inverse problem where we reconstruct an image that encodes the location of the neuron centers. The sparsity of the neuron centers serves as a prior model, while the forward model comprises shape models estimated from training data.

In the second chapter, we introduce multi-slice fusion, a novel framework to incorporate advanced prior models for inverse problems spanning many dimensions, such as 4D computed tomography (CT) reconstruction. State-of-the-art 4D reconstruction methods use model-based iterative reconstruction (MBIR), whose performance depends critically on the quality of the prior model. Incorporating deep convolutional neural networks (CNNs) in the 4D reconstruction problem is difficult due to computational cost and the lack of high-dimensional training data. Multi-slice fusion integrates the tomographic forward model with multiple low-dimensional CNN denoisers along different planes to produce a 4D regularized reconstruction. The improved regularization in multi-slice fusion allows each time-frame to be reconstructed from fewer measurements, resulting in an improved temporal resolution in the reconstruction. Experimental results on sparse-view and limited-angle CT data demonstrate that multi-slice fusion can substantially improve the quality of reconstructions relative to traditional methods, while also being practical to implement and train.

In the final chapter, we introduce CodEx, a synergistic combination of coded acquisition and a non-convex Bayesian reconstruction for improving acquisition speed in computed tomography (CT). In an ideal "step-and-shoot" tomographic acquisition, the object is rotated to each desired angle and the view is taken. However, step-and-shoot acquisition is slow and can waste photons, so in practice the object typically rotates continuously in time, leading to views that are blurry. This blur can then result in reconstructions with severe motion artifacts. CodEx works by encoding the acquisition with a known binary code that the reconstruction algorithm then inverts. The CodEx reconstruction method uses the alternating direction method of multipliers (ADMM) to split the inverse problem into iterative deblurring and reconstruction sub-problems, making reconstruction practical. CodEx allows for fast data acquisition, leading to good temporal resolution in the reconstruction.
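The benefit of coding the acquisition rather than accepting a plain rotation blur can be seen in a 1D toy: continuous rotation acts like a box blur over view angles, whose spectrum has exact nulls, while a well-chosen binary code keeps every frequency's gain bounded away from zero, making the deblurring step invertible. The code, lengths, and normalization below are our illustrative choices, not the CodEx design.

```python
import numpy as np

n = 64
box = np.ones(8)                                         # uncoded exposure
code = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=float)   # binary flutter code

def min_gain(c):
    """Smallest spectral magnitude of the length-n circular blur with taps c."""
    h = np.zeros(n)
    h[: len(c)] = c / c.sum()          # normalize to unit total exposure
    return np.abs(np.fft.fft(h)).min()

# The box blur has exact spectral zeros (those frequencies are unrecoverable);
# the coded blur retains gain at every frequency, so inversion is stable.
```

This is the same reasoning behind flutter-shutter photography: the reconstruction inverts a known, well-conditioned blur instead of an ill-conditioned one.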

Time-of-Flight Neutron CT for Isotope Density Reconstruction and Cone-Beam CT Separable Models

Thilo Balke (15348532) 26 April 2023 (has links)
There is a great need for accurate image reconstruction in the context of non-destructive evaluation. Major challenges include the ever-increasing necessity for high-resolution reconstruction with limited scan and reconstruction time, and thus fewer and noisier measurements. In this thesis, we leverage advanced Bayesian modeling of the physical measurement process and probabilistic prior information about the image distribution in order to yield higher image quality despite limited measurement time. We demonstrate efficient computational performance through the exploitation of more efficient memory access, optimized parametrization of the system model, and multi-pixel parallelization. We demonstrate that by building high-fidelity forward models we can generate quantitatively reliable reconstructions despite very limited measurement data.

In the first chapter, we introduce an algorithm for estimating isotopic densities from neutron time-of-flight imaging data. Energy-resolved neutron imaging (ERNI) is an advanced neutron radiography technique capable of non-destructively extracting spatial isotopic information within a given material. Energy-dependent radiography image sequences can be created by utilizing neutron time-of-flight techniques. In combination with uniquely characteristic isotopic neutron cross-section spectra, isotopic areal densities can be determined on a per-pixel basis, thus resulting in a set of areal density images for each isotope present in the sample. By performing ERNI measurements over several rotational views, an isotope-decomposed 3D computed tomography is possible. We demonstrate a method involving a robust and automated background estimation based on a linear programming formulation. The extremely high noise due to low-count measurements is overcome using a sparse coding approach. The method also allows for a significant computation time improvement, from weeks with existing neutron evaluation tools to a few hours, enabling at the present stage a semi-quantitative, user-friendly routine application.

In the second chapter, we introduce the TRINIDI algorithm, a more refined algorithm for the same problem. Accurate reconstruction of 2D and 3D isotope densities is a desired capability with great potential impact in applications such as evaluation and development of next-generation nuclear fuels. Neutron time-of-flight (TOF) resonance imaging offers a potential approach by exploiting the characteristic neutron absorption spectra of each isotope. However, it is a major challenge to compute quantitatively accurate images due to a variety of confounding effects such as severe Poisson noise, background scatter, beam non-uniformity, absorption non-linearity, and extended source pulse duration. We present the TRINIDI algorithm, which is based on a two-step process in which we first estimate the neutron flux and background counts, and then reconstruct the areal densities of each isotope and pixel. Both components are based on the inversion of a forward model that accounts for the highly non-linear absorption, energy-dependent emission profile, and Poisson noise, while also modeling the substantial spatio-temporal variation of the background and flux. To do this, we formulate the non-linear inverse problem as two optimization problems that are solved in sequence. We demonstrate on both synthetic and measured data that TRINIDI can reconstruct quantitatively accurate 2D views of isotopic areal density, which can then be reconstructed into quantitatively accurate 3D volumes of isotopic volumetric density.

In the third chapter, we introduce a separable forward model for cone-beam computed tomography (CT) that enables efficient computation of a Bayesian model-based reconstruction. Cone-beam CT is an attractive tool for many kinds of non-destructive evaluation (NDE). Model-based iterative reconstruction (MBIR) has been shown to improve reconstruction quality and reduce scan time. However, the computational burden and storage of the system matrix are challenging. In this chapter we present a separable representation of the system matrix that can be completely stored in memory and accessed cache-efficiently. This is done by quantizing the voxel position for one of the separable subproblems. A parallelized algorithm, which we refer to as the zipline update, is presented that speeds up the computation of the solution by about 50 to 100 times on 20 cores by updating groups of voxels together. The quality of the reconstruction and algorithmic scalability are demonstrated on real cone-beam CT data from an NDE application. We show that the reconstruction can be done from a sparse set of projection views while reducing artifacts visible in the conventional filtered back projection (FBP) reconstruction. We present qualitative results using a Markov random field (MRF) prior and a Plug-and-Play denoiser.

Study and Implementation of a Single Pixel Camera by Compressive Sampling

Matheus Esteves Ferreira 15 June 2021 (has links)
Single-pixel imaging consists in computationally reconstructing 2-dimensional images from a set of intensity measurements taken by a single-point detector. To derive the spatial information of a scene, a set of modulation patterns is applied to the transmitted/backscattered light from the object and combined with the integral signal on the detector. First, we present an overview of such optical systems and implement a proof of concept that can perform image acquisition using three different modes of operation: raster scanning, Hadamard basis scanning, and Hadamard compressive sampling. Second, we explore how the different experimental parameters affect image acquisition. Finally, we compare how the three scanning modes perform for acquisition of images of sizes ranging from (8px, 8px) to (128px, 128px).
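The Hadamard basis scanning mode described above can be sketched in a few lines: each pattern is displayed on the modulator, the single-pixel detector records one inner product per pattern, and the orthogonality of the basis makes reconstruction a single matrix multiply. The sizes and the Sylvester construction below are our choices for illustration, not details of this implementation.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 16                                      # a 4x4 image, flattened
scene = np.arange(n, dtype=float)           # toy scene
H = hadamard(n)                             # rows = modulation patterns
measurements = H @ scene                    # one detector reading per pattern
reconstruction = (H.T @ measurements) / n   # H^T H = n I recovers the scene
```

In practice a modulator cannot display the -1 entries directly, so each ±1 pattern is typically realized as a pair of complementary binary patterns whose readings are subtracted; compressive sampling then keeps only a subset of the patterns and solves a sparse recovery problem instead of the full multiply.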
