1. Feature-Specific Imaging: Extensions to Adaptive Object Recognition and Active Illumination Based Scene Reconstruction. Baheti, Pawan Kumar, January 2008.
Computational imaging (CI) systems are hybrid imagers in which the optical and post-processing sub-systems are jointly optimized to maximize task-specific performance. In this dissertation we consider a form of CI system that measures linear projections (i.e., features) of the scene optically, commonly referred to as feature-specific imaging (FSI). Most previous work on FSI has been concerned with image reconstruction, and previous FSI techniques have been non-adaptive and restricted to the use of ambient illumination. We consider two novel extensions of the FSI system in this work. We first present an adaptive feature-specific imaging (AFSI) system and consider its application to a face-recognition task. The proposed system uses previous measurements to adapt the projection basis at each step. We present both statistical and information-theoretic adaptation mechanisms for the AFSI system. A sequential hypothesis testing framework is used to determine the number of measurements required to achieve a specified misclassification probability. We demonstrate that the AFSI system requires significantly fewer measurements than static FSI (SFSI) and conventional imaging at low signal-to-noise ratio (SNR). We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage. Experimental results validating the AFSI system are presented. Next we present an FSI system based on structured light. Feature measurements are obtained by projecting spatially structured illumination onto an object and collecting all of the reflected light onto a single photodetector. We refer to this system as feature-specific structured imaging (FSSI). Principal component features are used to define the illumination patterns, and the optimal LMMSE operator is used to generate object estimates from the measurements.
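As an illustration of the sequential hypothesis testing idea used above to set the number of AFSI measurements, the sketch below implements Wald's sequential probability ratio test for a toy binary Gaussian feature model. The means, noise level, and error targets are assumptions for illustration, not values from the dissertation.

```python
import numpy as np

# Toy class-conditional feature model: y ~ N(mu0, sigma^2) under H0,
# y ~ N(mu1, sigma^2) under H1. Thresholds follow Wald's SPRT.
mu0, mu1, sigma = 0.0, 1.0, 2.0           # illustrative model parameters
alpha = beta = 1e-3                       # target false-alarm / miss rates
A = np.log((1 - beta) / alpha)            # upper threshold: decide H1
B = np.log(beta / (1 - alpha))            # lower threshold: decide H0

def sprt(measurements):
    """Return (decision, samples_used) for a stream of feature values."""
    llr = 0.0
    for n, y in enumerate(measurements, start=1):
        # Gaussian log-likelihood-ratio increment for one measurement
        llr += (y * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= A:
            return 1, n
        if llr <= B:
            return 0, n
    return (1 if llr > 0 else 0), len(measurements)

print(sprt([1.0] * 100))   # features at the H1 mean -> decides H1
print(sprt([0.0] * 100))   # features at the H0 mean -> decides H0
```

Lower measurement SNR stretches the random walk of the log-likelihood ratio, so more samples are needed before a threshold is crossed, which is the detection-time trade-off the abstract mentions.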
We demonstrate that this new imaging approach reduces imager complexity and improves image quality in high-noise environments. We then generalize the FSSI system by using random projections (i.e., assuming no object prior) to define the illumination patterns. Object estimates are generated using L1-norm minimization and gradient-projection sparse-reconstruction algorithms. Experimental results validating the FSSI system are presented.
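The LMMSE reconstruction step can be sketched as below. The exponential-correlation object prior and the random projection matrix (standing in for the principal-component illumination patterns) are illustrative assumptions, not the dissertation's actual operators.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 64, 16                 # object pixels, feature measurements

# Toy object prior: stationary exponential-correlation covariance R_x
i = np.arange(N)
Rx = 0.95 ** np.abs(i[:, None] - i[None, :])

F = rng.standard_normal((M, N)) / np.sqrt(N)   # assumed projection matrix
sigma2 = 0.01                                   # measurement noise variance
Rn = sigma2 * np.eye(M)

# LMMSE operator: W = Rx F^T (F Rx F^T + Rn)^(-1)
W = Rx @ F.T @ np.linalg.inv(F @ Rx @ F.T + Rn)

x = np.linalg.cholesky(Rx) @ rng.standard_normal(N)   # object from the prior
y = F @ x + np.sqrt(sigma2) * rng.standard_normal(M)  # noisy feature vector
x_hat = W @ y                                          # object estimate
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # relative error
```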

2. Applications of Non-Traditional Measurements for Computational Imaging. Treeaporn, Vicha, January 2017.
Imaging systems play an important role in many diverse applications. Requirements for these applications, however, can lead to complex or sub-optimal designs. Traditionally, imaging systems are designed to yield a visually pleasing representation, or "pretty picture", of the scene or object, often because a human operator views the acquired image to perform a specific task. With digital computers increasingly being used for automation, a large number of algorithms have been designed to accept a pretty picture as input. This isomorphic representation, however, is neither necessary nor optimal for tasks such as data compression, transmission, pattern recognition, or classification. This disconnect between optical measurement and post-processing for the final system outcome has motivated interest in computational imaging (CI). In a CI system the optical sub-system and post-processing sub-system are jointly designed to optimize system performance for a specific task. In these hybrid imagers, the measured image may no longer be a pretty picture but rather an intermediate non-traditional measurement. In this work, applications of non-traditional measurements are considered for computational imaging: two systems for an image reconstruction task are studied and one system for a detection task is investigated. First, a CI system to extend the field of view is analyzed and an experimental prototype demonstrated. This prototype validates the simulation study and is designed to yield a 3x field-of-view improvement relative to a conventional imager. Second, a CI system to acquire time-varying natural scenes, i.e., video, is developed. A candidate system using 8x8x16 spatiotemporal blocks yields about 292x compression compared to a conventional imager. Candidate electro-optical architectures, including charge-domain processing, to implement this approach are also discussed.
Lastly, a CI system with x-ray pencil beam illumination is investigated for a detection task where system performance is quantified using an information-theoretic metric.
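A minimal sketch of the block-wise compressive video measurement described above: each spatiotemporal block is reduced to a few coded projections. The per-block measurement count K is an assumption (the stated ~292x ratio for 8x8x16 blocks implies roughly 3 to 4 coded measurements per 1024-voxel block).

```python
import numpy as np

rng = np.random.default_rng(2)

bx, by, bt = 8, 8, 16
n_vox = bx * by * bt                       # 1024 voxels per block
K = 4                                      # measurements per block (assumed)

Phi = rng.standard_normal((K, n_vox))      # coded measurement patterns
block = rng.random((bx, by, bt))           # one spatiotemporal scene block

y = Phi @ block.reshape(-1)                # K non-traditional measurements
print(y.shape, n_vox / K)                  # here: 4 measurements, 256x ratio
```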

3. Computational Imaging and Its Applications in Fluids. Xiong, Jinhui, 13 September 2021.
Computational imaging differs from traditional imaging systems by integrating an encoded measurement system and a tailored computational algorithm to extract interesting scene features. This dissertation demonstrates two approaches that apply computational imaging methods to the fluid domain.

In the first approach, we study the problem of reconstructing time-varying 3D-3C fluid velocity vector fields. We extend 2D Particle Imaging Velocimetry to three dimensions by encoding depth into color (a "rainbow"). For reconstruction, we derive an image formation model for recovering stationary 3D particle positions. 3D velocity estimation is achieved with a variant of 3D optical flow that accounts for both physical constraints and the rainbow image formation model. This velocity field can be used to refine the position estimates by adding physical priors that tie together all the time steps, forming a joint reconstruction scheme.

In the second approach, we study the problem of reconstructing the 3D shape of underwater environments. The distortions from the moving water surface provide a changing parallax for each point on the underwater surface. We exploit this observation by jointly estimating both the underwater geometry and the dynamic shape of the water surface. To this end, we propose a novel differentiable framework that ties together all parameters in an integrated image formation model. To our knowledge, this is the first solution capable of simultaneously retrieving the structure of dynamic water surfaces and static underwater scene geometry in the wild.
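The depth-to-color ("rainbow") encoding at the heart of the first approach can be illustrated with a toy hue mapping: the illuminant's color varies with depth, so a particle's observed hue reveals its depth plane. The particular hue range below is an assumption, not the thesis's calibrated illuminant.

```python
import colorsys

# Depth z in [0, 1] mapped to hue from red (0.0) toward blue (0.7);
# decoding inverts the map: observed particle hue -> depth plane.
def depth_to_hue(z):
    return 0.7 * z

def hue_to_depth(h):
    return h / 0.7

z = 0.42                                           # a particle's true depth
r, g, b = colorsys.hsv_to_rgb(depth_to_hue(z), 1.0, 1.0)   # illuminant color
h, _, _ = colorsys.rgb_to_hsv(r, g, b)             # hue seen by the camera
print(abs(hue_to_depth(h) - z) < 1e-6)             # depth recovered from color
```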

4. Coded Shack-Hartmann Wavefront Sensor. Wang, Congli, 12 1900.
Wavefront sensing is an old yet fundamental problem in adaptive optics. Traditional wavefront sensors suffer from time-consuming measurements, complicated and expensive setups, or low theoretically achievable resolution.

In this thesis, we introduce a novel optically encoded and computationally decoded approach to the wavefront sensing problem: the Coded Shack-Hartmann. Our proposed Coded Shack-Hartmann wavefront sensor is inexpensive, easy to fabricate and calibrate, highly sensitive, and accurate, and it offers high resolution. Most importantly, by combining simple optical flow tracking with a phase smoothness prior and modern optimization techniques, the computational part is separable, efficient, and parallelized; real-time performance with high accuracy has been achieved on a graphics processing unit (GPU). This is validated by experimental results.

We also show how the optical flow intensity consistency term can be derived from rigorous scalar diffraction theory with proper approximations; this is the true physical law behind our model. Based on this insight, the Coded Shack-Hartmann can be interpreted as an illumination post-modulated wavefront sensor, which offers a new theoretical approach to wavefront sensor design.
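The pipeline above, optical-flow displacements interpreted as local wavefront slopes and then integrated into a phase map, can be sketched as a small least-squares problem. This dense toy solver is a stand-in for the thesis's efficient GPU implementation; the defocus-like test wavefront is an assumption.

```python
import numpy as np

n = 16
yy, xx = np.mgrid[0:n, 0:n] / (n - 1)
phi_true = (xx - 0.5) ** 2 + (yy - 0.5) ** 2        # toy defocus-like phase

# Slope "measurements": forward differences of the true phase, standing in
# for the local tilts that optical-flow tracking of the coded spots yields.
sx = phi_true[:, 1:] - phi_true[:, :-1]
sy = phi_true[1:, :] - phi_true[:-1, :]

# Stack every forward-difference constraint into one linear system D phi = s
idx = lambda r, c: r * n + c
rows, rhs = [], []
for r in range(n):
    for c in range(n - 1):                           # horizontal pairs
        row = np.zeros(n * n)
        row[idx(r, c + 1)], row[idx(r, c)] = 1.0, -1.0
        rows.append(row); rhs.append(sx[r, c])
for r in range(n - 1):
    for c in range(n):                               # vertical pairs
        row = np.zeros(n * n)
        row[idx(r + 1, c)], row[idx(r, c)] = 1.0, -1.0
        rows.append(row); rhs.append(sy[r, c])

phi, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
phi = phi.reshape(n, n)
phi -= phi.mean() - phi_true.mean()                  # remove unknown piston
print(np.max(np.abs(phi - phi_true)))                # recovered up to piston
```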

5. Computed tomography imaging system design for shape threat detection. Masoudi, Ahmad; Thamvichai, Ratchaneekorn; Neifeld, Mark A., 08 December 2016.
In the first part of this work, we present two methods for improving the shape-threat detection performance of x-ray computed tomography. Our work uses a fixed-gantry system employing 25 x-ray sources. We first utilize the Kullback-Leibler divergence and the Mahalanobis distance to determine the optimal single-source, single-exposure measurement. The second method employs gradient search on the Bhattacharyya bound on the error rate (P_e) to determine an optimal multiplexed measurement that simultaneously utilizes all available sources in a single exposure. With limited total resources of 10^6 photons, the multiplexed measurement provides a 41.8x reduction in P_e relative to the single-source measurement. In the second part, we consider multiple exposures and develop an adaptive measurement strategy for x-ray threat detection. Using the adaptive strategy, we design the next measurement based on information retrieved from previous measurements. We determine both the optimal "next measurement" and a stopping criterion to ensure a target P_e using a sequential hypothesis testing framework. With adaptive single-source measurements, we can reduce P_e by a factor of 40 relative to measurements employing all sources in sequence. We also observe a trade-off between measurement SNR and the number of detectors when we study the performance of systems with reduced detector counts. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
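For reference, the Bhattacharyya bound that the gradient search optimizes can be computed in closed form when the class-conditional measurement statistics are Gaussian: P_e <= 0.5 * exp(-D_B). The two-dimensional toy statistics below are illustrative, not the paper's 25-source model.

```python
import numpy as np

def bhattacharyya_distance(mu0, S0, mu1, S1):
    """Bhattacharyya distance between N(mu0, S0) and N(mu1, S1)."""
    S = 0.5 * (S0 + S1)
    dm = mu1 - mu0
    term1 = 0.125 * dm @ np.linalg.solve(S, dm)       # mean-separation term
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S0) * np.linalg.det(S1)))
    return term1 + term2

mu0 = np.zeros(2)                    # "no threat" measurement statistics
mu1 = np.array([1.0, 1.0])           # "threat" measurement statistics
S0 = S1 = 0.5 * np.eye(2)
DB = bhattacharyya_distance(mu0, S0, mu1, S1)
Pe_bound = 0.5 * np.exp(-DB)         # upper bound on error probability
print(DB, Pe_bound)
```

A measurement design that increases D_B tightens the bound, which is why it serves as a tractable surrogate objective for P_e.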

6. Volumetric imaging across spatiotemporal scales in biology with fluorescence microscopy. Sims, Ruth Rebecca, January 2019.
Quantitative three-dimensional maps of cellular structure, activity, and function provide the key to answering many prevalent questions in modern biological research. Fluorescence microscopy has emerged as an indispensable tool for generating such maps, but common techniques are limited by fundamental physical constraints that render them incapable of simultaneously achieving high spatial and temporal resolution. This thesis describes the development of novel microscopy techniques and complementary computational tools that address some of these limitations, and outlines their application in providing novel biological insights.

The first section details the design of a light sheet microscope capable of high-throughput imaging of cleared, macroscopic samples with cellular resolution. In light sheet microscopy, the combination of spatially confined illumination with widefield detection enables multi-megapixel acquisition in a single camera exposure. The corresponding increase in acquisition speed enables systems-level biological studies. The ability of this microscope to perform rapid, high-resolution imaging of intact samples is demonstrated by its application in a project that established a niche and hierarchy for stem cells in the adult nervous system.

Light sheet microscopy achieves fast volumetric imaging rates, but the two-dimensional nature of each measurement results in an inevitable lag between acquisition of the initial and final planes. The second section of this thesis describes the development and optimization of a light field microscope, which captures volumetric information in a snapshot. Light field microscopy is a computational technique, and images are reconstructed from raw data. Both the fidelity of the computed volumes and the efficiency of the algorithms are strongly dependent on the quality of the rectification; a highly accurate, automated rectification procedure is presented in this section. Light field reconstruction techniques are investigated and compared, and the results are used to inform the re-design of the microscope. The new optical configuration is demonstrated to minimize the long-object problem.

In the final section of the thesis, the spatial resolution limits of light field microscopy are explored using a combination of simulations and experiments. It is shown that light field microscopy is capable of localizing point sources over a large depth of field with high axial and lateral precision. Notably, this work paves the way towards frame-rate-limited super-resolution localization microscopy with a depth of field larger than the thickness of a typical mammalian cell.
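As a minimal illustration of snapshot volumetric capture with a light field, the classic shift-and-sum refocusing scheme is sketched below: views from different sub-apertures are shifted in proportion to their aperture offset and averaged, with the shift factor selecting the refocus depth. This is a generic light-field example, not the deconvolution-based reconstruction developed in the thesis.

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift each sub-aperture view by alpha * its offset, then average."""
    acc = np.zeros_like(views[0], dtype=float)
    for v, (du, dv) in zip(views, offsets):
        acc += np.roll(np.roll(v, int(round(alpha * du)), axis=0),
                       int(round(alpha * dv)), axis=1)
    return acc / len(views)

# A point source whose image shifts by one pixel per unit aperture offset
img = np.zeros((9, 9)); img[4, 4] = 1.0
offsets = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]
views = [np.roll(np.roll(img, -du, axis=0), -dv, axis=1)
         for du, dv in offsets]

stack = refocus(views, offsets, alpha=1.0)   # refocus onto the source plane
print(stack[4, 4])                           # views align back into one point
```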

7. Active Illumination for the Real World. Achar, Supreeth, 01 July 2017.
Active illumination systems use a controllable light source and a light sensor to measure properties of a scene. For such a system to work reliably across a wide range of environments, it must be able to handle the effects of global light transport, bright ambient light, interference from other active illumination devices, defocus, and scene motion. The goal of this thesis is to develop computational techniques and hardware arrangements that make active illumination devices based on commodity-grade components work under real-world conditions. We aim to combine the robustness of a scanning laser rangefinder with the speed, measurement density, compactness, and economy of a consumer depth camera. Towards this end, we have made four contributions. The first is a computational technique for compensating for the effects of motion while separating the direct and global components of illumination. The second is a method that combines triangulation with depth-from-illumination-defocus cues to increase the working range of a projector-camera system. The third is a new active illumination device that can efficiently image the epipolar component of light transport between a source and sensor. The device can measure depth using active stereo or structured light and is robust to many global light transport effects. Most importantly, it works outdoors in bright sunlight despite using a low-power source. Finally, we extend the proposed epipolar-only imaging technique to time-of-flight sensing and build a low-power sensor that is robust to sunlight, global illumination, multi-device interference, and camera shake. We believe that the algorithms and sensors proposed and developed in this thesis could find applications in a diverse set of fields including mobile robotics, medical imaging, gesture recognition, and agriculture.
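The direct/global separation underlying the first contribution can be illustrated with the classic two-pattern checkerboard scheme (Nayar-style): under high-frequency illumination with half the pixels lit, each pixel's maximum and minimum observations give direct = max - min and global = 2 * min. This static-scene sketch is the baseline; the thesis's contribution is compensating for motion during this process.

```python
import numpy as np

rng = np.random.default_rng(3)
H, W = 4, 4
direct_true = rng.random((H, W))             # direct reflection per pixel
global_true = rng.random((H, W))             # indirect (global) light per pixel

# Two complementary checkerboard patterns: every scene point is directly
# lit in exactly one frame, while global light, averaged over many paths,
# contributes half its total in each frame.
frames = []
for shift in range(2):
    lit = (np.indices((H, W)).sum(axis=0) + shift) % 2 == 0
    frames.append(lit * direct_true + 0.5 * global_true)
stack = np.stack(frames)

direct_est = stack.max(axis=0) - stack.min(axis=0)
global_est = 2.0 * stack.min(axis=0)
print(np.allclose(direct_est, direct_true), np.allclose(global_est, global_true))
```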

8. Occluder-aided non-line-of-sight imaging. Saunders, Charles, 27 September 2021.
Non-line-of-sight (NLOS) imaging is the inference of the properties of objects or scenes outside of the direct line-of-sight of the observer. Such inferences can range from a 2D photograph-like image of a hidden area, to determining the position, motion or number of hidden objects, to 3D reconstructions of a hidden volume. NLOS imaging has many enticing potential applications, such as leveraging the existing hardware in many automobiles to identify hidden pedestrians, vehicles or other hazards and hence plan safer trajectories. Other potential application areas include improving navigation for robots or drones by anticipating occluded hazards, peering past obstructions in medical settings, or in surveying unreachable areas in search-and-rescue operations. Most modern NLOS imaging methods fall into one of two categories: active imaging methods that have some control of the illumination of the hidden area, and passive methods that simply measure light that already exists. This thesis introduces two NLOS imaging methods, one of each category, along with modeling and data-processing techniques that are more broadly applicable. The methods are linked by their use of objects ('occluders') that reside somewhere between the observer and the hidden scene and block some possible light paths.
Computational periscopy, a passive method, can recover the unknown position of an occluding object in the hidden area and then recover an image of the hidden scene behind it. It does so using only a single photograph of a blank relay wall taken by an ordinary digital camera. We also develop a framework using an optimized preconditioning matrix to improve the speed at which these reconstructions can be made and to greatly improve robustness to ambient light. Lastly, we develop the tools necessary to demonstrate recovery of scenes at multiple unknown depths, paving the way towards three-dimensional reconstructions.
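At its core, the periscopy reconstruction is a regularized linear inverse problem: the relay-wall photograph is y = A x + noise, where A encodes how the occluder's shadow structures light from the hidden scene x. The sketch below uses a random stand-in for A and a generic Tikhonov solve rather than the optimized preconditioning developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
n_scene, n_wall = 32, 128

# Stand-in light-transport matrix (the real A is set by occluder shadows)
A = rng.standard_normal((n_wall, n_scene))
x_true = np.zeros(n_scene)
x_true[[5, 20]] = 1.0                        # two bright hidden-scene patches
y = A @ x_true + 0.01 * rng.standard_normal(n_wall)   # noisy wall photo

lam = 1e-2                                   # Tikhonov regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)
print(np.argsort(x_hat)[-2:])                # brightest recovered patches
```

Preconditioning attacks the poor conditioning of realistic A matrices, which is what makes the plain solve above fragile under strong ambient light.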
Edge-resolved transient imaging, an active method, enables the formation of 2.5D representations (a plan view plus heights) of large-scale scenes. A pulsed laser illuminates spots along a small semi-circle on the floor, centered on the edge of a vertical wall such as in a doorway. The wall edge occludes some light paths, allowing the laser light reflecting off the floor to illuminate only certain portions of the hidden area beyond the wall, depending on where along the semi-circle it is illuminating. The time at which photons return following a laser pulse is recorded. The occluding wall edge provides angular resolution, and time-resolved sensing provides radial resolution. This novel acquisition strategy, along with a scene response model and reconstruction algorithm, allows 180° field-of-view reconstructions of large-scale scenes, unlike other active imaging methods.
Lastly, we introduce a sparsity penalty named mutually exclusive group sparsity (MEGS), which can be used as a constraint or regularizer in optimization problems to promote solutions in which certain components are mutually exclusive. We explore how this penalty relates to other similar penalties, develop fast algorithms to solve MEGS-regularized problems, and demonstrate how enforcing mutual-exclusivity structure can provide great utility in NLOS imaging problems.
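The mutual-exclusivity structure that MEGS promotes can be illustrated by a simple hard projection that keeps at most one nonzero coefficient per group. This is a sketch of the constraint itself, not the thesis's fast regularized algorithms; the coefficient vector and grouping are made up for illustration.

```python
import numpy as np

def project_mutually_exclusive(x, groups):
    """Keep only the largest-magnitude entry in each group, zero the rest."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        g = np.asarray(g)
        k = g[np.argmax(np.abs(x[g]))]      # surviving index in this group
        out[k] = x[k]
    return out

x = np.array([0.2, -3.0, 1.0, 0.5, 0.4, -0.1])
groups = [[0, 1, 2], [3, 4, 5]]             # e.g. candidate depths per object
print(project_mutually_exclusive(x, groups))
```

In an NLOS setting, such groups might collect mutually exclusive hypotheses (for example, candidate depths for one object), of which at most one can be true.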

9. Directional photodetectors based on plasmonic metasurfaces for advanced imaging capabilities. Liu, Jianing, 24 May 2024.
With the continuous advancement of imaging technologies, imaging devices are no longer limited to the exclusive measurement of optical intensity (at the expense of all other degrees of freedom of the incident light) in a standard single-aperture configuration. Increasingly demanding applications are currently driving the exploration of more complex imaging capabilities, such as phase-contrast imaging, wavefront sensing, optical spatial filtering, and compound-eye vision. Many of these applications also require highly integrated, lightweight, and compact designs that do not sacrifice performance. Thanks to recent developments in micro- and nanophotonics, planar devices such as metasurfaces have emerged as a powerful new paradigm for constructing optical elements with extreme miniaturization and high design flexibility. Sophisticated simulation tools and high-resolution fabrication techniques have also become available to enable the implementation of these compact subwavelength structures in academic and industrial labs. In this dissertation, I present my work aimed at achieving directional light sensing by directly integrating composite plasmonic metasurfaces on the illumination windows of standard planar photodetectors. The devices developed in this work feature sharp detection peaks in their angular response with three different types of behavior: symmetric around the device surface normal, asymmetric with nearly linear angular variation around normal incidence, and geometrically tunable single peaks up to over 60 degrees. The performance of the proposed metasurfaces has been optimized by full-wave numerical simulations, and experimental devices have been fabricated and tested with a custom-designed measurement setup. The measured angular characteristics were then used to computationally demonstrate incoherent edge enhancement for computer vision and quantitative phase-contrast imaging for biomedical microscopy. Importantly, the device fabrication process has also been upgraded to wafer scale, further promoting the possibility of batch production of our devices.
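As a conceptual sketch of the incoherent edge enhancement mentioned above: a detector whose response varies (to first order) linearly with incidence angle acts like a spatial-derivative filter on the image, so edges stand out in its output. The finite difference below is a generic stand-in for that behavior, not the measured angular characteristics of the fabricated devices.

```python
import numpy as np

def edge_enhance(img):
    """First-order horizontal derivative as a stand-in for an
    angle-sensitive detector with a linear angular response."""
    return np.abs(np.diff(img.astype(float), axis=1))

scene = np.zeros((4, 8))
scene[:, 4:] = 1.0                  # a vertical step edge in intensity
edges = edge_enhance(scene)
print(edges.max(), edges[:, :3].sum())   # strong response only at the edge
```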

10. Single Shot Exposure Bracketing for High-Dynamic Range Imaging using a Multifunctional Metasurface. Brookshire, Charles Thomas, 17 April 2024.
We propose a hardware-driven solution to high dynamic range (HDR) imaging in the form of a single metasurface lens. Our design consists of a metasurface capable of forming nine low dynamic range (LDR) sub-images of varying intensities, scaled by factors of 2, onto an imaging sensor. After synthetically verifying the functionality of our design, the metasurface is fabricated and a prototype system is constructed for real-world experiments. Using the experimental system, we demonstrate the compatibility of our extracted LDR sub-images with pre-existing exposure-bracketing solutions for multi-image HDR fusion. The resulting HDR images are highly robust to scene motion because the multi-exposure LDR sub-images are captured instantaneously, enabling HDR video capture.
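The exposure-bracketing fusion that such sub-images feed can be sketched with a simplified Debevec-style weighted merge: each sub-image yields a radiance estimate (pixel value divided by its effective exposure), and a hat-shaped weight down-weights clipped pixels. Three sub-images and the specific weighting are illustrative assumptions; the actual system produces nine sub-images and uses established fusion methods.

```python
import numpy as np

def merge_hdr(ldr_stack, exposures):
    """Weighted average of per-exposure radiance estimates."""
    ldr = np.asarray(ldr_stack, dtype=float)
    w = 1.0 - np.abs(2.0 * ldr - 1.0)            # hat weights, 0 at clipping
    w = np.clip(w, 1e-6, None)
    rad = ldr / np.asarray(exposures)[:, None]    # radiance per sub-image
    return (w * rad).sum(axis=0) / w.sum(axis=0)

true_radiance = np.array([0.03, 0.3, 3.0])        # dark, mid, saturating pixel
exposures = [2.0 ** k for k in range(3)]          # 1x, 2x, 4x sub-images
stack = [np.clip(true_radiance * e, 0, 1) for e in exposures]
hdr = merge_hdr(stack, exposures)
print(hdr)   # dark and mid pixels recovered; fully clipped pixel is not
```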