  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Feature-Specific Imaging: Extensions to Adaptive Object Recognition and Active Illumination Based Scene Reconstruction

Baheti, Pawan Kumar January 2008 (has links)
Computational imaging (CI) systems are hybrid imagers in which the optical and post-processing sub-systems are jointly optimized to maximize task-specific performance. In this dissertation we consider a form of CI system that optically measures linear projections (i.e., features) of the scene, commonly referred to as feature-specific imaging (FSI). Most previous work on FSI has been concerned with image reconstruction, and previous FSI techniques have also been non-adaptive and restricted to ambient illumination. We consider two novel extensions of the FSI system in this work.

We first present an adaptive feature-specific imaging (AFSI) system and consider its application to a face-recognition task. The proposed system uses previous measurements to adapt the projection basis at each step. We present both statistical and information-theoretic adaptation mechanisms for the AFSI system. A sequential hypothesis testing framework is used to determine the number of measurements required to achieve a specified misclassification probability. We demonstrate that the AFSI system requires significantly fewer measurements than static FSI (SFSI) and conventional imaging at low signal-to-noise ratio (SNR). We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage. Experimental results validating the AFSI system are presented.

Next we present an FSI system based on structured light. Feature measurements are obtained by projecting spatially structured illumination onto an object and collecting all of the reflected light onto a single photodetector. We refer to this system as feature-specific structured imaging (FSSI). Principal component features are used to define the illumination patterns, and the optimal LMMSE operator is used to generate object estimates from the measurements. We demonstrate that this new imaging approach reduces imager complexity and provides improved image quality in high-noise environments. We then generalize the FSSI system by using random projections (i.e., no object prior) to define the illumination patterns. Object estimates are generated using L1-norm minimization and gradient-projection sparse reconstruction algorithms. Experimental results validating the FSSI system are presented.
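The FSSI measurement-and-estimation chain described above can be sketched numerically: illumination patterns taken from the principal components of an object prior, one single-detector feature measurement per pattern, and an LMMSE reconstruction. The prior below is a synthetic correlated-Gaussian ensemble standing in for real training imagery, and the sizes (16x16 objects, 32 features) are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical object prior: 16x16 "objects" drawn from a correlated Gaussian
# ensemble that stands in for a real training set (sizes are illustrative).
n_pix = 16 * 16
mixing = rng.normal(size=(n_pix, n_pix))
train = rng.normal(size=(2000, n_pix)) @ mixing * 0.1
mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)

# Illumination patterns = leading principal components of the prior.
n_feat = 32
eigvals, eigvecs = np.linalg.eigh(cov)
P = eigvecs[:, -n_feat:].T                 # (n_feat, n_pix) projection basis

# Each feature: one structured pattern projected onto the object, with all
# reflected light integrated on a single photodetector (one number per pattern).
x_true = train[0]
sigma = 0.05
y = P @ x_true + rng.normal(scale=sigma, size=n_feat)

# LMMSE estimate of the object from the noisy feature measurements.
S = P @ cov @ P.T + sigma**2 * np.eye(n_feat)
x_hat = mu + cov @ P.T @ np.linalg.solve(S, y - P @ mu)

print(x_hat.shape)  # (256,)
```

Even with only 32 feature measurements of a 256-pixel object, the LMMSE estimate should beat the prior mean, which is the sense in which feature-specific measurement trades measurement count against task performance.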
2

Applications of Non-Traditional Measurements for Computational Imaging

Treeaporn, Vicha January 2017 (has links)
Imaging systems play an important role in many diverse applications. Requirements for these applications, however, can lead to complex or sub-optimal designs. Traditionally, imaging systems are designed to yield a visually pleasing representation, or "pretty picture", of the scene or object. Often this is because a human operator views the acquired image to perform a specific task. With digital computers increasingly being used for automation, a large number of algorithms have been designed to accept a pretty picture as input. This isomorphic representation, however, is neither necessary nor optimal for tasks such as data compression, transmission, pattern recognition, or classification. This disconnect between optical measurement and post-processing for the final system outcome has motivated interest in computational imaging (CI). In a CI system the optical sub-system and post-processing sub-system are jointly designed to optimize system performance for a specific task. In these hybrid imagers, the measured image may no longer be a pretty picture but rather an intermediate non-traditional measurement. In this work, applications of non-traditional measurements are considered for computational imaging. Two systems for an image reconstruction task are studied and one system for a detection task is investigated. First, a CI system to extend the field of view is analyzed and an experimental prototype demonstrated. This prototype validates the simulation study and is designed to yield a 3x field-of-view improvement relative to a conventional imager. Second, a CI system to acquire time-varying natural scenes, i.e., video, is developed. A candidate system using 8x8x16 spatiotemporal blocks yields about 292x compression compared to a conventional imager. Candidate electro-optical architectures, including charge-domain processing, to implement this approach are also discussed. Lastly, a CI system with x-ray pencil-beam illumination is investigated for a detection task where system performance is quantified using an information-theoretic metric.
3

Computational Imaging and Its Applications in Fluids

Xiong, Jinhui 13 September 2021 (has links)
Computational imaging differs from traditional imaging systems by integrating an encoded measurement system and a tailored computational algorithm to extract interesting scene features. This dissertation demonstrates two approaches that apply computational imaging methods to the fluid domain. In the first approach, we study the problem of reconstructing time-varying 3D-3C fluid velocity vector fields. We extend 2D Particle Imaging Velocimetry to three dimensions by encoding depth into color (a "rainbow"). For reconstruction, we derive an image formation model for recovering stationary 3D particle positions. 3D velocity estimation is achieved with a variant of 3D optical flow that accounts for both physical constraints and the rainbow image formation model. This velocity field can be used to refine the position estimate by adding physical priors that tie together all the time steps, forming a joint reconstruction scheme. In the second approach, we study the problem of reconstructing the 3D shape of underwater environments. The distortions from the moving water surface provide a changing parallax for each point on the underwater surface. We utilize this observation by jointly estimating both the underwater geometry and the dynamic shape of the water surface. To this end, we propose a novel differentiable framework to tie together all parameters in an integrated image formation model. To our knowledge, this is the first solution capable of simultaneously retrieving the structure of dynamic water surfaces and static underwater scene geometry in the wild.
4

Coded Shack-Hartmann Wavefront Sensor

Wang, Congli 12 1900 (has links)
Wavefront sensing is an old yet fundamental problem in adaptive optics. Traditional wavefront sensors suffer from time-consuming measurements, complicated and expensive setups, or low theoretically achievable resolution. In this thesis, we introduce a novel, optically encoded and computationally decodable approach to the wavefront sensing problem: the Coded Shack-Hartmann. Our proposed Coded Shack-Hartmann wavefront sensor is inexpensive, easy to fabricate and calibrate, highly sensitive, accurate, and high-resolution. Most importantly, by combining simple optical flow tracking with a phase smoothness prior and modern optimization techniques, the computational part is efficient and parallelizable, and real-time performance with high accuracy has been achieved on a Graphics Processing Unit (GPU). This is validated by experimental results. We also show how the optical flow intensity consistency term can be derived using rigorous scalar diffraction theory with proper approximations; this is the true physical law behind our model. Based on this insight, the Coded Shack-Hartmann can be interpreted as an illumination post-modulated wavefront sensor, which offers a new theoretical approach to wavefront sensor design.
5

Computed tomography imaging system design for shape threat detection

Masoudi, Ahmad, Thamvichai, Ratchaneekorn, Neifeld, Mark A. 08 December 2016 (has links)
In the first part of this work, we present two methods for improving the shape-threat detection performance of x-ray computed tomography. Our work uses a fixed-gantry system employing 25 x-ray sources. We first utilize Kullback-Leibler divergence and Mahalanobis distance to determine the optimal single-source single-exposure measurement. The second method employs gradient search on the Bhattacharyya bound on error rate (P_e) to determine an optimal multiplexed measurement that simultaneously utilizes all available sources in a single exposure. With limited total resources of 10^6 photons, the multiplexed measurement provides a 41.8x reduction in P_e relative to the single-source measurement. In the second part, we consider multiple exposures and develop an adaptive measurement strategy for x-ray threat detection. Using the adaptive strategy, we design the next measurement based on information retrieved from previous measurements. We determine both the optimal "next measurement" and a stopping criterion that ensures a target P_e using a sequential hypothesis testing framework. With adaptive single-source measurements, we can reduce P_e by a factor of 40x relative to measurements employing all sources in sequence. In studying the performance of systems with reduced detector numbers, we also observe a trade-off between measurement SNR and the number of detectors. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
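The stopping criterion from the sequential hypothesis testing framework mentioned above can be illustrated with Wald's classical SPRT, in which the target error probabilities set the decision thresholds. This is a generic textbook sketch, not the dissertation's x-ray measurement design; the per-measurement log-likelihood ratios here are made-up constants.

```python
import math

def sprt_thresholds(alpha, beta):
    """Wald's approximate SPRT thresholds from target error probabilities
    (alpha: false alarm, beta: miss)."""
    upper = math.log((1 - beta) / alpha)   # decide H1 when LLR >= upper
    lower = math.log(beta / (1 - alpha))   # decide H0 when LLR <= lower
    return upper, lower

def sprt(llr_increments, alpha=0.01, beta=0.01):
    """Accumulate per-measurement log-likelihood ratios until a threshold
    is crossed; return the decision and the number of measurements used."""
    upper, lower = sprt_thresholds(alpha, beta)
    llr = 0.0
    for n, inc in enumerate(llr_increments, start=1):
        llr += inc
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(llr_increments)

# Evidence favoring H1 by 0.8 nats per measurement crosses ln(99) ~ 4.6
# after six measurements.
print(sprt([0.8] * 20))  # ('H1', 6)
```

The expected number of measurements falls as each measurement carries more evidence, which is one way to read the abstract's trade-off between measurement SNR and detection time.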
6

Volumetric imaging across spatiotemporal scales in biology with fluorescence microscopy

Sims, Ruth Rebecca January 2019 (has links)
Quantitative three-dimensional maps of cellular structure, activity and function provide the key to answering many prevalent questions in modern biological research. Fluorescence microscopy has emerged as an indispensable tool in generating such maps, but common techniques are limited by fundamental physical constraints which render them incapable of simultaneously achieving high spatial and temporal resolution. This thesis describes the development of novel microscopy techniques and complementary computational tools capable of addressing some of the aforementioned limitations of fluorescence microscopy, and further outlines their application in providing novel biological insights.

The first section details the design of a light sheet microscope capable of high-throughput imaging of cleared, macroscopic samples with cellular resolution. In light sheet microscopy, the combination of spatially confined illumination with widefield detection enables multi-megapixel acquisition in a single camera exposure. The corresponding increase in acquisition speed enables systems-level biological studies to be performed. The ability of this microscope to perform rapid, high-resolution imaging of intact samples is demonstrated by its application in a project which established a niche and hierarchy for stem cells in the adult nervous system.

Light sheet microscopy achieves fast volumetric imaging rates, but the two-dimensional nature of each measurement results in an inevitable lag between acquisition of the initial and final planes. The second section of this thesis describes the development and optimization of a light field microscope which captures volumetric information in a snapshot. Light field microscopy is a computational technique, and images are reconstructed from raw data. Both the fidelity of the computed volumes and the efficiency of the algorithms are strongly dependent on the quality of the rectification; a highly accurate, automated rectification procedure is presented in this section. Light field reconstruction techniques are investigated and compared, and the results are used to inform the re-design of the microscope. The new optical configuration is demonstrated to minimize the long-object problem.

In the final section of the thesis, the spatial resolution limits of light field microscopy are explored using a combination of simulations and experiments. It is shown that light field microscopy is capable of localizing point sources over a large depth of field with high axial and lateral precision. Notably, this work paves the way towards frame-rate-limited super-resolution localization microscopy with a depth of field larger than the thickness of a typical mammalian cell.
7

Active Illumination for the Real World

Achar, Supreeth 01 July 2017 (has links)
Active illumination systems use a controllable light source and a light sensor to measure properties of a scene. For such a system to work reliably across a wide range of environments it must be able to handle the effects of global light transport, bright ambient light, interference from other active illumination devices, defocus, and scene motion. The goal of this thesis is to develop computational techniques and hardware arrangements to make active illumination devices based on commodity-grade components that work under real world conditions. We aim to combine the robustness of a scanning laser rangefinder with the speed, measurement density, compactness, and economy of a consumer depth camera. Towards this end, we have made four contributions. The first is a computational technique for compensating for the effects of motion while separating the direct and global components of illumination. The second is a method that combines triangulation and depth from illumination defocus cues to increase the working range of a projector-camera system. The third is a new active illumination device that can efficiently image the epipolar component of light transport between a source and sensor. The device can measure depth using active stereo or structured light and is robust to many global light transport effects. Most importantly, it works outdoors in bright sunlight despite using a low power source. Finally, we extend the proposed epipolar-only imaging technique to time-of-flight sensing and build a low-power sensor that is robust to sunlight, global illumination, multi-device interference, and camera shake. We believe that the algorithms and sensors proposed and developed in this thesis could find applications in a diverse set of fields including mobile robotics, medical imaging, gesture recognition, and agriculture.
8

Occluder-aided non-line-of-sight imaging

Saunders, Charles 27 September 2021 (has links)
Non-line-of-sight (NLOS) imaging is the inference of the properties of objects or scenes outside of the direct line-of-sight of the observer. Such inferences can range from a 2D photograph-like image of a hidden area, to determining the position, motion or number of hidden objects, to 3D reconstructions of a hidden volume. NLOS imaging has many enticing potential applications, such as leveraging the existing hardware in many automobiles to identify hidden pedestrians, vehicles or other hazards and hence plan safer trajectories. Other potential application areas include improving navigation for robots or drones by anticipating occluded hazards, peering past obstructions in medical settings, or surveying unreachable areas in search-and-rescue operations. Most modern NLOS imaging methods fall into one of two categories: active imaging methods that have some control over the illumination of the hidden area, and passive methods that simply measure light that already exists. This thesis introduces two NLOS imaging methods, one of each category, along with modeling and data-processing techniques that are more broadly applicable. The methods are linked by their use of objects ('occluders') that reside somewhere between the observer and the hidden scene and block some possible light paths.

Computational periscopy, a passive method, can recover the unknown position of an occluding object in the hidden area and then recover an image of the hidden scene behind it. It does so using only a single photograph of a blank relay wall taken by an ordinary digital camera. We also develop a framework using an optimized preconditioning matrix to improve the speed at which these reconstructions can be made and to greatly improve robustness to ambient light. Lastly, we develop the tools necessary to demonstrate recovery of scenes at multiple unknown depths, paving the way towards three-dimensional reconstructions.

Edge-resolved transient imaging, an active method, enables the formation of 2.5D representations (a plan view plus heights) of large-scale scenes. A pulsed laser illuminates spots along a small semi-circle on the floor, centered on the edge of a vertical wall such as in a doorway. The wall edge occludes some light paths, allowing the laser light reflecting off of the floor to illuminate only certain portions of the hidden area beyond the wall, depending on where along the semi-circle it is illuminating. The time at which photons return following a laser pulse is recorded. The occluding wall edge provides angular resolution, and time-resolved sensing provides radial resolution. This novel acquisition strategy, along with a scene response model and reconstruction algorithm, allows for 180° field-of-view reconstructions of large-scale scenes, unlike other active imaging methods.

Lastly, we introduce a sparsity penalty named mutually exclusive group sparsity (MEGS) that can be used as a constraint or regularization in optimization problems to promote solutions in which certain components are mutually exclusive. We explore how this penalty relates to other similar penalties, develop fast algorithms to solve MEGS-regularized problems, and demonstrate how enforcing mutual exclusivity structure can provide great utility in NLOS imaging problems.
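As a rough illustration of the mutual exclusivity idea behind MEGS, the toy projection below keeps only the largest-magnitude entry in each group and zeroes the rest. This is a simplified reading of the constraint for intuition only; the thesis develops the actual penalty, its relation to similar penalties, and fast solvers.

```python
import numpy as np

def megs_project(x, groups):
    """Toy mutual-exclusivity projection: within each group of indices, keep
    only the largest-magnitude entry of x and zero the others, so at most
    one component per group is active."""
    out = np.zeros_like(x)
    for g in groups:
        g = np.asarray(g)
        keep = g[np.argmax(np.abs(x[g]))]
        out[keep] = x[keep]
    return out

x = np.array([0.1, -2.0, 0.5, 3.0, 0.0, -1.0])
groups = [[0, 1, 2], [3, 4, 5]]
proj = megs_project(x, groups)  # keeps -2.0 from the first group, 3.0 from the second
```

Used inside a projected-gradient loop, such a step promotes solutions where, for example, at most one candidate depth or position per scene patch is active.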
9

Single Shot Exposure Bracketing for High-Dynamic Range Imaging using a Multifunctional Metasurface

Charles Thomas Brookshire (18396522) 17 April 2024 (has links)
We propose a hardware-driven solution to high dynamic range (HDR) imaging in the form of a single metasurface lens. Our design consists of a metasurface capable of forming nine low dynamic range (LDR) sub-images of varying intensities, scaling by a factor of 2, onto an imaging sensor. After verifying the functionality of our design in simulation, the metasurface is fabricated and a prototype system is constructed for real-world experiments. Using the experimental system, we demonstrate the compatibility of our extracted LDR sub-images with pre-existing exposure-bracketing solutions for multi-image HDR fusion. The resulting HDR images are highly robust to scene motion because the multi-exposure LDR sub-images are captured instantaneously, enabling HDR video.
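The exposure-bracketing fusion that the nine sub-images feed into can be sketched with a generic weighted-average HDR merge: divide each LDR image by its relative exposure, then average using only well-exposed pixels. The thresholds and weighting below are illustrative assumptions, not the pipeline used in the thesis.

```python
import numpy as np

def fuse_hdr(ldr_stack, exposures, floor=0.05, sat=0.95):
    """Merge bracketed LDR images into a radiance map: scale each image to a
    common exposure, then average with binary weights that discard saturated
    and underexposed pixels (illustrative thresholds)."""
    ldr = np.asarray(ldr_stack, dtype=float)
    t = np.asarray(exposures, dtype=float).reshape(-1, 1, 1)
    valid = ((ldr > floor) & (ldr < sat)).astype(float)
    radiance = ldr / t
    num = (valid * radiance).sum(axis=0)
    den = valid.sum(axis=0)
    return np.where(den > 0, num / np.maximum(den, 1.0), 0.0)

# Nine exposures scaling by a factor of 2, as in the metasurface design.
exposures = [2.0 ** k for k in range(9)]
scene = np.array([[0.001, 0.02, 0.3]])            # true radiance (toy values)
ldr = [np.clip(scene * t, 0.0, 1.0) for t in exposures]
hdr = fuse_hdr(ldr, exposures)                    # recovers the scene radiance
```

Because the metasurface forms all nine sub-images in one shot, the same fusion step runs per frame without the inter-exposure motion that plagues sequential bracketing.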
10

Temporal Coding of Volumetric Imagery

Llull, Patrick Ryan January 2016 (has links)
'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications.

Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.

Since video cameras nominally integrate the remaining image volume dimensions (e.g., spectrum and focus) at capture time, spectral (x,y,t,λ) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video, namely focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions.

Geometrical-optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.

Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position, and we demonstrate this technique's capability to generate arbitrary point spread functions.
