21

A Task-Specific Approach to Computational Imaging System Design

Ashok, Amit January 2008 (has links)
The traditional approach to imaging system design places the sole burden of image formation on optical components. In contrast, a computational imaging system relies on a combination of optics and post-processing to produce the final image and/or output measurement. Therefore, the joint optimization (JO) of the optical and post-processing degrees of freedom plays a critical role in the design of computational imaging systems. The JO framework also allows us to incorporate task-specific performance measures to optimize an imaging system for a specific task. In this dissertation, we consider the design of computational imaging systems within a JO framework for two separate tasks: object reconstruction and iris recognition. The goal of these design studies is to optimize the imaging system to overcome the performance degradations introduced by under-sampled image measurements. Within the JO framework, we engineer the optical point spread function (PSF) of the imager, representing the optical degrees of freedom, in conjunction with the post-processing algorithm parameters to maximize task performance. For the object reconstruction task, the optimized imaging system achieves a 50% improvement in resolution and nearly 20% lower reconstruction root-mean-square error (RMSE) compared to the un-optimized imaging system. For the iris-recognition task, the optimized imaging system achieves a 33% improvement in false rejection ratio (FRR) at a fixed false alarm ratio (FAR) relative to the conventional imaging system. The effect of performance measures such as resolution, RMSE, FRR, and FAR on the optimal design highlights the crucial role of task-specific design metrics in the JO framework. We introduce a fundamental measure of task-specific performance known as task-specific information (TSI), an information-theoretic measure that quantifies the information content of an image measurement relevant to a specific task. A variety of source models are derived to illustrate the application of a TSI-based analysis to conventional and compressive imaging (CI) systems for tasks such as target detection and classification. A TSI-based design and optimization framework is also developed and applied to the design of CI systems for the task of target detection; it yields a six-fold performance improvement over the conventional imaging system at low signal-to-noise ratios.
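To make the joint-optimization loop concrete, the sketch below (an editorial illustration, not code from this dissertation) sweeps a single optical degree of freedom, the width of a Gaussian PSF, over a simulated noisy blurred measurement, applies Wiener deconvolution as the post-processing stage, and reports the PSF width that minimizes reconstruction RMSE. The signal model, noise level, and function names are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of jointly choosing an optical parameter and a
# post-processing step: sweep a Gaussian PSF width and pick the one that
# minimizes reconstruction RMSE after Wiener deconvolution.

rng = np.random.default_rng(0)
n = 256
x = np.zeros(n)
x[rng.integers(0, n, 12)] = rng.uniform(0.5, 1.0, 12)      # sparse test scene
noise_sigma = 0.01
freqs = np.fft.fftfreq(n)

def gaussian_otf(width):
    """Fourier transform of a normalized Gaussian PSF of given width (pixels)."""
    return np.exp(-2.0 * (np.pi * freqs * width) ** 2)

def measure_and_reconstruct(width):
    """Blur, add noise, Wiener-deconvolve, and return reconstruction RMSE."""
    H = gaussian_otf(width)
    y = np.fft.ifft(np.fft.fft(x) * H).real + noise_sigma * rng.standard_normal(n)
    wiener = np.conj(H) / (np.abs(H) ** 2 + noise_sigma ** 2)   # crude regularization
    x_hat = np.fft.ifft(np.fft.fft(y) * wiener).real
    return np.sqrt(np.mean((x_hat - x) ** 2))

widths = np.linspace(0.5, 4.0, 15)
rmse = [measure_and_reconstruct(w) for w in widths]
best = widths[int(np.argmin(rmse))]
print(f"best PSF width {best:.2f} px, RMSE {min(rmse):.4f}")
```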
22

Coding Strategies and Implementations of Compressive Sensing

Tsai, Tsung-Han January 2016 (has links)
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others.

This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.

Optical multidimensional imaging measures three or more dimensions of information in the optical signal. Traditional multidimensional imagers acquire the extra dimensions at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise performance, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds.

The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor can localize multiple speakers in both stationary and dynamic auditory scenes and distinguish mixed conversations from independent sources with a high audio recognition rate. / Dissertation
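As a rough illustration of how a multiplexed measurement can be inverted with a sparsity prior, the following sketch simulates a random compressive measurement y = A x and recovers x with a basic iterative shrinkage-thresholding (ISTA) loop. The dimensions, sensing matrix, and threshold are assumptions for illustration; they are not the hardware or algorithms of this dissertation.

```python
import numpy as np

# Minimal sketch of compressive measurement and sparse recovery (ISTA) for
# the generic model y = A x + noise.

rng = np.random.default_rng(1)
n, m, k = 400, 120, 10                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)      # random multiplexing matrix
y = A @ x + 0.01 * rng.standard_normal(m)

def ista(y, A, lam=0.02, iters=500):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x_hat = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x_hat - y)
        z = x_hat - grad / L
        x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x_hat

x_hat = ista(y, A)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```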
23

Computational Optical Imaging Systems for Spectroscopy and Wide Field-of-View Gigapixel Photography

Kittle, David S. January 2013 (has links)
This dissertation explores computational optical imaging methods that circumvent the physical limitations of classical sensing. An ideal imaging system would maximize resolution in time, spectral bandwidth, three-dimensional object space, and polarization. In practice, increasing any one of these parameters correspondingly decreases the others.

Spectrometers strive to measure the power spectral density of the object scene. Traditional pushbroom spectral imagers acquire high spectral and spatial resolution at the expense of acquisition time. Multiplexed spectral imagers acquire spectral and spatial information at each instant in time. Using a coded aperture and a dispersive element, the coded aperture snapshot spectral imagers (CASSI) described here leverage correlations between voxels in the spatial-spectral data cube to compressively sample the power spectral density with minimal loss in spatial-spectral resolution while maintaining high temporal resolution.

Photography is limited by similar physical constraints. Low f/# systems are required for high spatial resolution, to circumvent diffraction limits and allow more photon transfer to the film plane, but they require larger optical volumes and more optical elements. Wide-field systems similarly suffer from increasing complexity and optical volume. By incorporating a multi-scale optical system, the f/#, resolving power, optical volume, and field of view become much less coupled. This system uses a single objective lens that images onto a curved spherical focal plane, which is relayed by small micro-optics to discrete focal planes. This design methodology allows gigapixel designs at low f/# that weigh only a few pounds and fit within a one-foot hemisphere.

Computational imaging systems add the necessary steps of forward modeling and calibration. Since the mapping from object space to image space is no longer directly readable, post-processing is required to recover the desired data. The CASSI system uses an undersampled measurement matrix that must be inverted, while the multi-scale camera requires image stitching and compositing methods for the billions of pixels in the image. Calibration methods and a testbed developed specifically for these computational imaging systems are demonstrated. / Dissertation
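The CASSI measurement described above can be summarized by a simple forward model: a binary coded aperture modulates each spectral slice of the datacube, the disperser shears the slices laterally, and the detector integrates over wavelength. The toy sketch below shows that model and the resulting compression ratio; the sizes, code pattern, and one-pixel-per-band shear are all illustrative assumptions, not the actual instrument parameters.

```python
import numpy as np

# Minimal sketch of a CASSI-style forward model: code each spectral slice,
# shear it along x by one pixel per band, and sum over bands on the detector.

rng = np.random.default_rng(2)
ny, nx, nl = 64, 64, 8                     # rows, cols, spectral bands
cube = rng.random((ny, nx, nl))            # stand-in spatial-spectral datacube
code = (rng.random((ny, nx)) > 0.5).astype(float)   # binary coded aperture

def cassi_forward(cube, code):
    """Code, shear along x by one pixel per band, then integrate over bands."""
    ny, nx, nl = cube.shape
    detector = np.zeros((ny, nx + nl - 1))
    for l in range(nl):
        detector[:, l:l + nx] += code * cube[:, :, l]
    return detector

y = cassi_forward(cube, code)
print("datacube voxels:", cube.size, "-> detector pixels:", y.size)
```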
24

Sampling and Signal Estimation in Computational Optical Sensors

Shankar, Mohan 14 December 2007 (has links)
Computational sensing utilizes non-conventional sampling mechanisms along with processing algorithms to accomplish various sensing tasks, providing additional flexibility in designing imaging or spectroscopic systems. This dissertation analyzes sampling and signal estimation techniques through three computational sensing systems, each built for a specific task. The first is a thin long-wave infrared imaging system based on multichannel sampling. A significant reduction in optical system thickness is obtained over a conventional system by modifying conventional sampling mechanisms and applying reconstruction algorithms. In addition, an information-theoretic analysis of sampling in conventional as well as multichannel imaging systems is performed, and the feasibility of multichannel sampling for imaging is demonstrated using an information-theoretic metric. The second system is an application of the multichannel approach to the design of compressive low-power video sensors; two sampling schemes are demonstrated that exploit spatial as well as temporal aliasing. The third system is a novel computational spectroscopic system for detecting chemicals, which utilizes surface plasmon resonances to encode information about the chemicals under test. / Dissertation
25

Computational Imaging For Miniature Cameras

Salahieh, Basel January 2015 (has links)
Miniature cameras play a key role in numerous imaging applications ranging from endoscopy and metrology inspection devices to smartphones and head-mounted acquisition systems. However, due to physical constraints, the imaging conditions, and the low quality of small optics, their imaging capabilities are limited in terms of the delivered resolution, the acquired depth of field, and the captured dynamic range. Computational imaging jointly addresses the imaging system and the reconstruction algorithms to bypass the traditional limits of optical systems and deliver better restorations for various applications. The scene is encoded into a set of efficient measurements which can then be computationally decoded to output a richer estimate of the scene than the raw images captured by conventional imagers. In this dissertation, three task-based computational imaging techniques are developed to make low-quality miniature cameras capable of delivering realistic high-resolution reconstructions, providing full-focus imaging, and acquiring depth information for high-dynamic-range objects. For the superresolution task, a non-regularized direct superresolution algorithm is developed to achieve realistic restorations without being penalized by improper assumptions (e.g., optimizers, priors, and regularizers) made in the inverse problem. An adaptive frequency-based filtering scheme is introduced to upper-bound the reconstruction errors while still producing finer details than previous methods under realistic imaging conditions. For the full-focus imaging task, a computational depth-based deconvolution technique is proposed to bring a scene captured by an ordinary fixed-focus camera to full focus based on a depth-variant point spread function prior. Ringing artifacts are suppressed on three levels: block tiling to eliminate boundary artifacts, adaptive reference maps to reduce ringing initiated by sharp edges, and block-wise deconvolution or depth-based masking to suppress artifacts initiated by neighboring depth-transition surfaces. Finally, for the depth acquisition task, a multi-polarization fringe projection imaging technique is introduced to eliminate saturated points and enhance fringe contrast by selecting the properly polarized channel measurements. The developed technique can easily be extended to include measurements captured under different exposure times to obtain more accurate shape rendering for very high dynamic range objects.
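As a loose illustration of depth-based deconvolution (not the dissertation's calibrated pipeline), the sketch below blurs a toy two-layer scene with a depth-dependent PSF, deconvolves the capture with each layer's PSF, and composites the results using the depth map as a mask. The PSF model, depth map, and Wiener regularizer are assumptions chosen only to show the structure of the approach.

```python
import numpy as np

# Minimal sketch of depth-masked deconvolution: deconvolve the capture with
# the PSF of each depth layer, then composite using a (toy) depth map.

rng = np.random.default_rng(3)
n = 128
scene = rng.random((n, n))
depth = (np.arange(n)[:, None] < n // 2).astype(int)    # two depth layers (toy map)

fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")

def defocus_otf(width):
    """Gaussian stand-in for a depth-dependent defocus OTF."""
    return np.exp(-2.0 * (np.pi * width) ** 2 * (fx ** 2 + fy ** 2))

otfs = [defocus_otf(1.0), defocus_otf(3.0)]              # near layer sharper than far

# Simulate capture: each layer blurred by its own PSF, then combined.
captured = np.zeros((n, n))
for d, H in enumerate(otfs):
    layer = np.where(depth == d, scene, 0.0)
    captured += np.fft.ifft2(np.fft.fft2(layer) * H).real

# Depth-masked Wiener deconvolution and compositing.
restored = np.zeros((n, n))
for d, H in enumerate(otfs):
    wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-2)
    est = np.fft.ifft2(np.fft.fft2(captured) * wiener).real
    restored = np.where(depth == d, est, restored)

print("RMSE before:", np.sqrt(np.mean((captured - scene) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((restored - scene) ** 2)))
```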
26

Computational Optical Imaging Systems: Sensing Strategies, Optimization Methods, and Performance Bounds

Harmany, Zachary Taylor January 2012 (has links)
The emerging theory of compressed sensing has been nothing short of a revolution in signal processing, challenging some of the longest-held ideas in signal processing and leading to the development of exciting new ways to capture and reconstruct signals and images. Although the theoretical promises of compressed sensing are manifold, its implementation in many practical applications has lagged behind the associated theoretical development. Our goal is to elevate compressed sensing from an interesting theoretical discussion to a feasible alternative to conventional imaging, a significant challenge and an exciting topic for research in signal processing. When applied to imaging, compressed sensing can be thought of as a particular case of computational imaging, which unites the design of both the sensing and reconstruction of images under one design paradigm. Computational imaging tightly fuses modeling of scene content, imaging hardware design, and the subsequent reconstruction algorithms used to recover the images.

This thesis makes important contributions to each of these three areas through two primary research directions. The first direction primarily attacks the challenges associated with designing practical imaging systems that implement incoherent measurements. Our proposed snapshot imaging architecture using compressive coded aperture imaging devices can be practically implemented, and comes equipped with theoretical recovery guarantees. It is also straightforward to extend these ideas to a video setting where careful modeling of the scene can allow for joint spatio-temporal compressive sensing. The second direction develops a host of new computational tools for photon-limited inverse problems. These situations arise with increasing frequency in modern imaging applications as we seek to drive down image acquisition times, limit excitation powers, or deliver less radiation to a patient. By an accurate statistical characterization of the measurement process in optical systems, including the inherent Poisson noise associated with photon detection, our class of algorithms is able to deliver high-fidelity images with a fraction of the required scan time, as well as enable novel methods for tissue quantification from intraoperative microendoscopy data. In short, the contributions of this dissertation are diverse, further the state-of-the-art in computational imaging, elevate compressed sensing from an interesting theory to a practical imaging methodology, and allow for effective image recovery in light-starved applications. / Dissertation
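The photon-limited setting mentioned above is usually modeled with Poisson statistics. The sketch below is an illustrative toy, not the dissertation's algorithms (which add sparsity regularization among other refinements): it minimizes the Poisson negative log-likelihood of y ~ Poisson(A f) by projected gradient descent. The sensing matrix, step size, and iteration count are assumptions.

```python
import numpy as np

# Minimal sketch of photon-limited reconstruction: minimize the Poisson
# negative log-likelihood of y ~ Poisson(A f) with a nonnegativity constraint.

rng = np.random.default_rng(4)
n, m = 100, 60
f_true = np.abs(rng.standard_normal(n)) * 5.0          # nonnegative scene intensities
A = rng.random((m, n)) / n                              # nonnegative sensing matrix
y = rng.poisson(A @ f_true)                             # observed photon counts

f = np.ones(n)                                          # positive initialization
step = 1e-1
for _ in range(2000):
    rate = A @ f + 1e-12                                # avoid division by zero
    grad = A.T @ (1.0 - y / rate)                       # gradient of the Poisson NLL
    f = np.maximum(f - step * grad, 0.0)                # project onto nonnegativity

print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```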
27

Pupil engineering in a miniaturized fluorescent microscopy platform using binary diffractive optics

Greene, Joseph Lewis 07 October 2019 (has links)
There is an unprecedented need in neuroscience and medical research for the precise imaging of individual neurons and their interconnectivity in an effort to achieve a more complete understanding of neurological illness and cognitive growth. While several imaging architectures successfully detect active neural tissue, fluorescent imaging through head-mounted microscopes is becoming a standard method of imaging neural circuitry in freely behaving animals. At Boston University, the Gardner Group developed a miniaturized, open-source, single-photon ‘finch-scope’ to spur rapid prototyping in head-mounted miniscope technology. While experimentally convenient, the finch-scope and other miniscope platforms are limited by their native depth of field and may only detect a thin layer of active neurons in a neurological volume. In this Master’s Thesis Project, I will investigate utilizing optical phase masks integrated in the Fourier plane of the finch-scope to invoke a less-diffractive Bessel point spread function. Next, I will experimentally justify the extended depth of field nature of these phase masks by imaging the axial profile of a 10μm fluorescent pinhole object with a modified finch-scope.
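A minimal way to see what a pupil-plane binary phase mask does is to build the pupil function numerically, add a quadratic defocus phase, and watch how the on-axis PSF intensity falls off through focus with and without the mask. In the sketch below the grid size, ring radii, and defocus values are hypothetical choices for illustration, not the finch-scope's parameters.

```python
import numpy as np

# Minimal sketch of pupil engineering: add a binary (0 or pi) annular phase
# mask to the pupil and compare the through-focus falloff of the on-axis PSF
# intensity with and without the mask.

n = 256
coords = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(coords, coords)
r = np.sqrt(X ** 2 + Y ** 2)
aperture = (r <= 1.0).astype(float)
mask_phase = np.where((r > 0.6) & (r <= 0.85), np.pi, 0.0)   # hypothetical ring

def on_axis_intensity(defocus_waves, use_mask):
    """On-axis PSF intensity for a given defocus (waves at the pupil edge)."""
    phase = 2 * np.pi * defocus_waves * r ** 2               # quadratic defocus
    if use_mask:
        phase = phase + mask_phase
    pupil = aperture * np.exp(1j * phase)
    return np.abs(pupil.sum()) ** 2                          # center of the PSF

for tag, use_mask in (("clear pupil", False), ("binary mask", True)):
    i0 = on_axis_intensity(0.0, use_mask)
    trace = [on_axis_intensity(w, use_mask) / i0 for w in (0.0, 0.5, 1.0, 1.5)]
    print(tag, ["%.3f" % v for v in trace])
```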
28

Computer vision at low light

Abhiram Gnanasambandam (12863432) 14 June 2022 (has links)
Imaging in low light is difficult because the number of photons arriving at the image sensor is low. This is a major technological challenge for applications such as surveillance and autonomous driving. Conventional CMOS image sensors (CIS) circumvent this issue by using techniques such as burst photography. However, this process is slow, and it does not solve the underlying problem that the CIS cannot efficiently capture the signals arriving at the sensor. This dissertation focuses on solving this problem using a combination of better image sensors (quanta image sensors) and computational imaging techniques.

The first part of the thesis involves understanding how quanta image sensors work and how they can be used to solve the low-light imaging problem. The second part is about the algorithms that can deal with images obtained in low light. The contributions in this part include: (1) understanding and proposing solutions for the Poisson noise model, (2) proposing a new machine learning scheme called student-teacher learning that helps neural networks deal with noise, and (3) developing solutions that work not only in low light but across a wide range of signal and noise levels. Using these ideas, we can address a variety of low-light applications, such as color imaging, dynamic scene reconstruction, deblurring, and object detection.
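A quick way to see why low-light imaging is hard, and why summing many short exposures helps, is to simulate the per-pixel photon-count (Poisson) model directly. The sketch below uses illustrative flux levels and frame counts; it is a toy shot-noise demonstration, not the dissertation's sensor model or algorithms.

```python
import numpy as np

# Toy shot-noise demonstration: pixel values are Poisson counts whose mean is
# the (small) photon flux per exposure, so single frames are noise-dominated
# and averaging many frames recovers the signal roughly as sqrt(frames).

rng = np.random.default_rng(5)
flux = np.linspace(0.05, 2.0, 8)            # mean photons/pixel/frame (toy "scene")
pixels = 2_000

def empirical_snr(frames):
    """Empirical SNR of the per-pixel flux estimate after summing `frames` exposures."""
    counts = rng.poisson(flux, size=(frames, pixels, flux.size)).sum(axis=0)
    est = counts / frames
    err = est - flux
    return float(np.mean(flux) / np.sqrt(np.mean(err ** 2)))

for frames in (1, 16, 256):
    print(f"{frames:4d} frame(s): empirical SNR {empirical_snr(frames):.1f}")
```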
29

Co-design of optical systems with phase masks for depth of field extension: performance evaluation and contribution of superresolution

Falcon Maimone, Rafael 19 October 2017 (has links)
Phase masks are wavefront-encoding devices typically situated at the aperture stop of an optical system to engineer its point spread function (PSF), a technique commonly known as wavefront coding. These masks can be used to extend the depth of field (DoF) of imaging systems without reducing the light throughput, by producing a PSF that is more invariant to defocus; however, the larger the DoF, the more blurred the acquired raw image, so deconvolution has to be applied to the captured images. The design of the phase masks therefore has to take image processing into account in order to reach the optimal compromise between invariance of the PSF to defocus and the capacity to deconvolve the image. This joint design approach was introduced by Cathey and Dowski in 1995, refined in 2002 for continuous-phase DoF-enhancing masks, and generalized by Robinson and Stork in 2007 to correct other optical aberrations.

In this thesis we study the different aspects of phase mask optimization for DoF extension, such as the different performance criteria and the relation of these criteria to the various mask parameters. We use the so-called image quality (IQ), a mean-square-error-based criterion defined by Diaz et al., to co-design different phase masks and evaluate their performance. We then compare the relevance of the IQ criterion against other optical design metrics, such as the Strehl ratio and the modulation transfer function (MTF). We focus in particular on binary annular phase masks and their performance under various conditions, such as the desired DoF range, the number of optimization parameters, and the presence of aberrations.

We then apply the analysis tools developed for the binary phase masks to continuous-phase masks that appear commonly in the literature, such as the polynomial-phase masks. We extensively compare these masks to each other and to the binary masks, not only to assess their benefits, but also because analyzing their differences helps us understand their properties.

Phase masks function as a low-pass filter on diffraction-limited systems, effectively reducing aliasing. On the other hand, the signal processing technique known as superresolution uses several aliased frames of the same scene to enhance the resolution of the final image beyond the sampling resolution of the original optical system. Practical examples come from work carried out during a secondment with the industrial partner KLA-Tencor in Leuven, Belgium. At the end of the manuscript we study the relevance of using such a technique alongside phase masks for DoF extension.
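In the spirit of the mean-square-error image-quality criterion described above, the sketch below scores a candidate 1D binary pupil phase profile by averaging the Wiener-deconvolution residual MSE over a set of defocus values and compares it against a clear pupil. The pupil sampling, noise level, flat object spectrum, and ring radius are illustrative assumptions, not the thesis's actual IQ model.

```python
import numpy as np

# Minimal sketch of an MSE-based image-quality score for phase-mask co-design:
# average the Wiener residual MSE of a 1D system over a range of defocus.

n = 512
u = np.linspace(-1.0, 1.0, n)                   # normalized pupil coordinate
noise_psd = 1e-3                                # assumed noise-to-signal ratio

def otf(defocus_waves, mask_phase):
    """1D incoherent OTF magnitude computed from the pupil via its PSF."""
    pupil = np.exp(1j * (2 * np.pi * defocus_waves * u ** 2 + mask_phase))
    field = np.fft.fft(pupil, 4 * n)            # coherent PSF amplitude (zero-padded)
    psf = np.abs(field) ** 2
    H = np.abs(np.fft.ifft(psf))
    return H / H[0]

def iq_criterion(mask_phase, defocus_range=(0.0, 0.5, 1.0, 1.5)):
    """Defocus-averaged Wiener residual MSE (lower is better) for a flat object spectrum."""
    mses = []
    for w in defocus_range:
        H = otf(w, mask_phase)[:n]              # positive frequencies within the OTF support
        mses.append(np.mean(noise_psd / (np.abs(H) ** 2 + noise_psd)))
    return float(np.mean(mses))

no_mask = np.zeros(n)
binary_mask = np.where(np.abs(u) > 0.7, np.pi, 0.0)      # hypothetical binary ring
print("IQ, clear pupil :", round(iq_criterion(no_mask), 4))
print("IQ, binary mask :", round(iq_criterion(binary_mask), 4))
```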
30

Image Restoration Methods for Imaging through Atmospheric Turbulence

Zhiyuan Mao (15209827) 12 April 2023 (has links)
The performance of long-range imaging systems often suffers due to the presence of atmospheric turbulence. One way to alleviate the degradation caused by atmospheric turbulence is to apply post-processing mitigation algorithms, in which a high-quality frame is reconstructed from a single degraded image or a sequence of degraded frames. Image processing algorithms for atmospheric turbulence mitigation have been studied for decades, yet some critical problems remain open.

This dissertation addresses the problem of image reconstruction through atmospheric turbulence from three perspectives: (1) reconstruction in the presence of moving objects using an improved classical image-processing pipeline; (2) a fast simulation scheme for efficiently generating large-scale turbulence-degraded datasets for training deep neural networks; and (3) a deep-learning-based single-frame reconstruction method using a Vision Transformer.
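As a toy stand-in for the kind of simulator used to generate turbulence-degraded training data (far simpler than the fast simulation scheme referenced above), the sketch below degrades a clean image with a random global tilt applied as a Fourier-domain shift plus a random Gaussian blur, once per frame of a burst. All parameters are illustrative assumptions.

```python
import numpy as np

# Drastically simplified stand-in for a turbulence degradation simulator:
# each frame gets a random sub-pixel tilt (Fourier shift) and a random blur.

rng = np.random.default_rng(6)
n = 128
clean = rng.random((n, n))
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")

def degrade(img, max_shift=2.0, max_blur=2.0):
    dy, dx = rng.uniform(-max_shift, max_shift, 2)               # random tilt
    sigma = rng.uniform(0.5, max_blur)                           # random blur width
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))  # Gaussian OTF
    shift = np.exp(-2j * np.pi * (fy * dy + fx * dx))            # Fourier-domain shift
    return np.fft.ifft2(np.fft.fft2(img) * H * shift).real

burst = np.stack([degrade(clean) for _ in range(16)])            # degraded training frames
print("burst shape:", burst.shape,
      "first-frame RMSE:", round(float(np.sqrt(np.mean((burst[0] - clean) ** 2))), 4))
```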
