  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Computer vision at low light

Abhiram Gnanasambandam (12863432) 14 June 2022 (has links)
Imaging in low light is difficult because the number of photons arriving at the image sensor is low. This is a major technological challenge for applications such as surveillance and autonomous driving. Conventional CMOS image sensors (CIS) circumvent this issue by using techniques such as burst photography. However, this process is slow, and it does not solve the underlying problem: the CIS cannot efficiently capture the signals arriving at the sensor. This dissertation focuses on solving this problem using a combination of better image sensors (quanta image sensors) and computational imaging techniques.

The first part of the thesis involves understanding how quanta image sensors work and how they can be used to solve the low-light imaging problem. The second part concerns algorithms that can deal with images obtained in low light. The contributions in this part include: (1) understanding and proposing solutions for the Poisson noise model, (2) proposing a new machine learning scheme called student-teacher learning to help neural networks deal with noise, and (3) developing solutions that work not only in low light but across a wide range of signal and noise levels. Using these ideas, we can address a variety of low-light applications, such as color imaging, dynamic scene reconstruction, deblurring, and object detection.
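The burst-photography baseline and the Poisson photon-arrival model the abstract refers to can be sketched numerically; the scene size and photon levels below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth scene radiance, scaled so the mean photon count per
# pixel per frame is very low (~1 photon), as in low-light imaging.
scene = np.clip(rng.random((32, 32)), 0.05, 1.0) * 2.0

def capture(scene, n_frames):
    """Simulate n_frames photon-limited exposures and average them
    (a stand-in for burst photography on a conventional sensor)."""
    frames = rng.poisson(scene, size=(n_frames,) + scene.shape)
    return frames.mean(axis=0)

single = capture(scene, 1)
burst = capture(scene, 64)

# Averaging N Poisson-limited frames cuts the noise standard deviation
# by sqrt(N), so the burst estimate is far closer to the true radiance.
err_single = np.abs(single - scene).mean()
err_burst = np.abs(burst - scene).mean()
print(err_single > err_burst)  # True
```

The sqrt(N) improvement is exactly why burst photography works, and also why it is slow: every factor-of-ten noise reduction costs a hundred frames, which motivates better sensors rather than longer bursts.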
22

Co-design of optical systems with phase masks for depth of field extension: performance evaluation and contribution of superresolution

Falcon Maimone, Rafael 19 October 2017 (has links)
Phase masks are wavefront encoding devices typically situated at the aperture stop of an optical system to engineer its point spread function (PSF), in a technique commonly known as wavefront coding. These masks can be used to extend the depth of field (DoF) of imaging systems without reducing the light throughput by producing a PSF that is more invariant to defocus; however, the larger the DoF, the more blurred the acquired raw image, so deconvolution has to be applied to the captured images. The design of the phase masks therefore has to take image processing into account in order to reach the optimal compromise between invariance of the PSF to defocus and capacity to deconvolve the image. This joint design approach was introduced by Cathey and Dowski in 1995, refined in 2002 for continuous-phase DoF-enhancing masks, and generalized by Robinson and Stork in 2007 to correct other optical aberrations.

In this thesis we study the different aspects of phase mask optimization for DoF extension, such as the performance criteria and the relation of these criteria to the mask parameters. We use the so-called image quality (IQ), a mean-square-error-based criterion defined by Diaz et al., to co-design different phase masks and evaluate their performance. We then compare the relevance of the IQ criterion against other optical design metrics, such as the Strehl ratio and the modulation transfer function (MTF). We focus in particular on binary annular phase masks and their performance under various conditions, such as the desired DoF range, the number of optimization parameters, and the presence of aberrations. We then apply the analysis tools developed for the binary phase masks to continuous-phase masks that appear commonly in the literature, such as polynomial-phase masks. We extensively compare these masks to each other and to the binary masks, not only to assess their benefits, but also because analyzing their differences helps us understand their properties.

Phase masks function as low-pass filters on diffraction-limited systems, effectively reducing aliasing. On the other hand, the signal processing technique known as superresolution uses several aliased frames of the same scene to enhance the resolution of the final image beyond the sampling resolution of the original optical system. Practical examples come from work done during a secondment with the industrial partner KLA-Tencor in Leuven, Belgium. At the end of the manuscript we study the relevance of using such a technique alongside phase masks for DoF extension.
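The core wavefront-coding trade-off, a phase mask buying defocus invariance at the cost of a blurrier raw image, can be illustrated with a toy 1D simulation. The cubic mask below is the classic continuous-phase example; the pupil sampling, mask strength, and defocus range are arbitrary illustrative choices, not the masks optimized in the thesis:

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
pupil = (np.abs(x) <= 1).astype(float)

def psf(defocus, alpha=0.0):
    """1D incoherent PSF for a clear pupil with an optional cubic phase
    mask alpha*x^3 and a quadratic defocus term defocus*x^2 (in waves)."""
    phase = defocus * x**2 + alpha * x**3
    field = pupil * np.exp(1j * 2 * np.pi * phase)
    h = np.abs(np.fft.fftshift(np.fft.fft(field)))**2
    return h / h.sum()

def spread(defoci, alpha):
    """Total variation of the PSF over the defocus range: lower means a
    more defocus-invariant PSF, i.e. a single deconvolution filter
    works across a larger depth of field."""
    psfs = np.array([psf(d, alpha) for d in defoci])
    return np.abs(psfs - psfs.mean(axis=0)).sum()

defoci = np.linspace(-3, 3, 7)
plain = spread(defoci, alpha=0.0)
coded = spread(defoci, alpha=10.0)  # hypothetical mask strength
print(coded < plain)  # cubic mask makes the PSF less defocus-sensitive
```

The coded PSF is broader than the in-focus diffraction-limited one, which is exactly the blur the joint design has to trade against deconvolution quality.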
23

Interferometric reflectance microscopy for physical and chemical characterization of biological nanoparticles

Yurdakul, Celalettin 27 September 2021 (has links)
Biological nanoparticles have enormous utility as well as potential adverse impacts in biotechnology, human health, and medicine. The physical and chemical properties of these nanoparticles have strong implications on their distribution, circulation, and clearance in vivo. Accurate morphological visualization and chemical characterization of nanoparticles by label-free (direct) optical microscopy would provide valuable insights into their natural and intrinsic properties. However, three major challenges related to label-free nanoparticle imaging must be overcome: (i) weak contrast due to exceptionally small size and low-refractive-index difference with the surrounding medium, (ii) inadequate spatial resolution to discern nanoscale features, and (iii) lack of chemical specificity. Advances in common-path interferometric microscopy have successfully overcome the weak contrast limitation and enabled direct detection of low-index biological nanoparticles down to single proteins. However, interferometric light microscopy does not overcome the diffraction limit, and studying the nanoparticle morphology at sub-wavelength spatial resolution remains a significant challenge. Moreover, chemical signature and composition are inaccessible in these interferometric optical measurements. This dissertation explores innovations in common-path interferometric microscopy to provide enhanced spatial resolution and chemical specificity in high-throughput imaging of individual nanoparticles. The dissertation research effort focuses on a particular modality of interferometric imaging, termed “single-particle interferometric reflectance (SPIR) microscopy”, that uses an oxide-coated silicon substrate for enhanced coherent detection of the weakly scattered light. We seek to advance three specific aspects of SPIR microscopy: sensitivity, spatial resolution, and chemical specificity. 
The first is to enhance particle visibility via novel optical and computational methods that push optical detection sensitivity. The second is to improve the lateral resolution beyond the system's classical limit with a new computational imaging method whose engineered illumination function accesses high-resolution spatial information at the nanoscale. The last is to extract a distinctive chemical signature by probing the mid-infrared absorption-induced photothermal effect. To realize these goals, we introduce new theoretical models and experimental concepts. This dissertation makes four major contributions to the field of wide-field common-path interferometric microscopy: (1) formulating a vectorial-optics-based linear forward model that describes interferometric light scattering near planar interfaces in the quasi-static limit, (2) developing computationally efficient image reconstruction methods from defocus images to detect a single 25 nm dielectric nanoparticle, (3) developing asymmetric-illumination-based computational microscopy methods to achieve direct morphological visualization of nanoparticles at 150 nm resolution, and (4) developing bond-selective interferometric microscopy to enable multispectral chemical imaging of sub-wavelength nanoparticles in the vibrational fingerprint region. Collectively, these research projects significantly advance wide-field common-path interferometric microscopy toward accurate visualization and chemical characterization of individual biological nanoparticles across a broad size range, with high resolution and sensitivity.
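The reason common-path interferometry reaches much smaller particles than pure scattering detection is that the interferometric cross term scales with the scattered field amplitude (~r³ in the quasi-static limit) rather than the scattered intensity (~r⁶). A back-of-the-envelope sketch, with hypothetical particle sizes:

```python
import numpy as np

# Radii of dielectric nanoparticles in nm (hypothetical values).
radii = np.array([10.0, 25.0, 50.0, 100.0])

# Scattered field amplitude scales with particle polarizability ~ r^3
# in the Rayleigh / quasi-static limit.
E_s = (radii / 100.0) ** 3
E_r = 1.0  # reference field, e.g. reflected from an oxide-coated substrate

pure_scattering = E_s**2         # darkfield-style signal ~ r^6
interferometric = 2 * E_r * E_s  # cross term ~ r^3

# Halving the radius from 50 nm to 25 nm costs 64x in pure scattering
# but only 8x in the interferometric cross term, which is why
# common-path interferometry detects much smaller particles.
ratio_scat = pure_scattering[2] / pure_scattering[1]
ratio_int = interferometric[2] / interferometric[1]
print(ratio_scat, ratio_int)  # 64.0 8.0
```

The reference field also sets the shot-noise floor, so in practice the detection limit depends on the substrate design (here the oxide-coated silicon) as well as on this scaling.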
24

Edge-resolved non-line-of-sight imaging

Seidel, Sheila W. 17 January 2023 (has links)
Over the past decade, the possibility of forming images of objects hidden from line-of-sight (LOS) view has emerged as an intriguing and potentially important expansion of computational imaging and computer vision technology. This capability could help soldiers anticipate danger in a tunnel system, autonomous vehicles avoid collision, and first responders safely traverse a building. In many scenarios where non-line-of-sight (NLOS) vision is desired, the LOS view is obstructed by a wall with a vertical edge. In this thesis we show that through modeling and computation, the impediment to LOS itself can be exploited for enhanced resolution of the hidden scene. NLOS methods may be active, where controlled illumination of the hidden scene is used, or passive, relying only on already present light sources. In both active and passive NLOS imaging, measured light returns to the sensor after multiple diffuse bounces. Each bounce scatters light in all directions, eliminating directional information. When the scene is hidden behind a wall with a vertical edge, that edge occludes light as a function of its incident azimuthal angle around the edge. Measurements acquired on the floor adjacent to the occluding edge thus contain rich azimuthal information about the hidden scene. In this thesis, we explore several edge-resolved NLOS imaging systems that exploit the occlusion provided by a vertical edge. In addition to demonstrating novel edge-resolved NLOS imaging systems with real experimental data, this thesis includes modeling, performance bound analyses, and inversion algorithms for the proposed systems. We first explore the use of a single vertical edge to form a 1D (in azimuthal angle) reconstruction of the hidden scene. Prior work demonstrated that temporal variation in a video of the floor may be used to image moving components of the hidden scene. 
In contrast, our algorithm reconstructs both moving and stationary hidden scenery from a single photograph, without assuming uniform floor albedo. We derive a forward model that describes the measured photograph as a nonlinear combination of the unknown floor albedo and the light from behind the wall. The inverse problem, the joint estimation of floor albedo and a 1D reconstruction of the hidden scene, is solved via optimization, where we introduce regularizers that help separate light variations in the measured photograph due to the floor pattern and the hidden scene, respectively. Next, we combine the resolving power of a vertical edge with the relationship between intensity and radial distance to form 2D reconstructions from a single passive photograph. We derive a new forward model, accounting for radial falloff, and propose two inversion algorithms to form 2D reconstructions from a single photograph of the penumbra. The performance of both algorithms is demonstrated on experimental data corresponding to several different hidden scene configurations. A Cramér-Rao bound analysis further demonstrates the feasibility and limitations of this 2D corner camera. Our doorway camera exploits the occlusion provided by the two vertical edges of a doorway for more robust 2D reconstruction of the hidden scene. This work provides and demonstrates a novel inversion algorithm to jointly estimate two views of change in the hidden scene, using the temporal difference between photographs acquired on the visible side of the doorway. A Cramér-Rao bound analysis is used to demonstrate the 2D resolving power of the doorway camera over other passive acquisition strategies and to motivate the novel biangular reconstruction grid. Lastly, we present the active corner camera. Most existing active NLOS methods illuminate the hidden scene using a pulsed laser directed at a relay surface and collect time-resolved measurements of returning light. The prevailing approaches are inherently limited by the need for laser scanning, a process that is generally too slow to image hidden objects in motion. Methods that avoid laser scanning track the moving parts of the hidden scene as one or two point targets. In this work, based on more complete optical response modeling, yet still without multiple illumination positions, we demonstrate accurate reconstructions of objects in motion and a 'map' of the stationary scenery behind them. This new ability to count, localize, and characterize the sizes of hidden objects in motion, combined with mapping of the stationary hidden scene, could greatly improve indoor situational awareness in a variety of applications.
25

Augmenting label-free imaging modalities with deep learning based digital staining

Cheng, Shiyi 30 August 2023 (has links)
Label-free imaging modalities offer numerous advantages, such as avoiding the time-consuming and potentially disruptive process of physical staining. However, one challenge in label-free imaging is the limited ability to extract specific structural or molecular information from the acquired images. To overcome this limitation, a novel approach known as digital staining or digital labeling has emerged. Digital staining leverages deep learning algorithms to virtually introduce labels or stains into label-free images, thereby enabling the extraction of detailed information that would typically require physical staining. The integration of digital staining with label-free imaging holds great promise for expanding the capabilities of imaging techniques, facilitating improved analysis, and advancing our understanding of biological systems at both the cellular and tissue levels. In this thesis, I explore supervised and semi-supervised methodologies for digital staining and their applications in augmenting label-free imaging modalities, particularly in the context of cell imaging and brain imaging. In the first part of the thesis, I demonstrate the novel integration of multi-contrast dark-field reflectance microscopy and supervised deep learning to enable subcellular immunofluorescence labeling and cell cytometry from label-free imaging. By leveraging the rich structural information and sensitivity of reflectance microscopy, this method accurately predicts subcellular features without the need for physical staining. Thanks to the novel multi-contrast modality, the digital labeling approach demonstrates significant improvements over state-of-the-art techniques, achieving up to 3× their prediction accuracy. In addition to fluorescence prediction, the method successfully reproduces single-cell-level structural phenotypes related to cell cycles.
The multiplexed readouts obtained through digital labeling enable accurate multi-parametric single-cell profiling across a large cell population. In the second part, I investigate a novel digital staining optical coherence tomography (DS-OCT) modality that combines the advantages of serial sectioning OCT (S-OCT) and semi-supervised deep learning, and demonstrate several advantages for 3D histological brain imaging. The DS model is trained using a semi-supervised learning framework that incorporates unpaired translation, a biophysical model, and cross-modality image registration, and is broadly applicable to other weakly paired bioimaging modalities. The DS model enables the translation of S-OCT images to Gallyas silver staining, providing consistent staining quality across different samples. I further show that DS enhances contrast across cortical layer boundaries and enables reliable cortical layer differentiation. Additionally, DS-OCT preserves 3D geometry on centimeter-scale brain tissue blocks. My pilot study demonstrates promising results on other anatomical regions acquired from different S-OCT systems, highlighting its potential for generalization to various imaging contexts. Overall, I investigate the problem of augmenting label-free imaging modalities with deep-learning-generated digital stains. I explore both supervised and semi-supervised methods for building novel DS frameworks. My work showcases two important applications: immunofluorescence cell imaging and 3D histological brain imaging. On the one hand, the integration of DS techniques with multi-contrast microscopy has the potential to enhance the throughput of single-cell imaging cytometry and phenotyping. On the other hand, integrating DS techniques with S-OCT holds great potential for high-throughput human brain imaging, enabling comprehensive studies of the structure and function of the brain.
Through this exploration, I aim to shed light on the impact of digital staining in the field of computational imaging and its implications for various scientific disciplines.
26

Coded Measurement for Imaging and Spectroscopy

Portnoy, Andrew David January 2009 (has links)
This thesis describes three computational optical systems and their underlying coding strategies. These codes are useful in a variety of optical imaging and spectroscopic applications. Two multichannel cameras are described. Both use a lenslet array to generate multiple copies of a scene on the detector, and digital processing combines the measured data into a single image. The visible system uses focal plane coding, and the long wave infrared (LWIR) system uses shift coding. With proper calibration, the multichannel interpolation recovers contrast for targets at frequencies beyond the aliasing limit of the individual subimages. This thesis also describes a LWIR imaging system that simultaneously measures four wavelength channels, each with narrow bandwidth. In this system, lenses, aperture masks, and dispersive optics implement a spatially varying spectral code.
27

Digital Phase Correction of a Partially Coherent Sparse Aperture System

Krug, Sarah Elaine 27 August 2015 (has links)
No description available.
28

HIGH SPEED IMAGING VIA ADVANCED MODELING

Soumendu Majee (10942896) 04 August 2021 (has links)
There is an increasing need to accurately image objects at high temporal resolution in order to analyze the underlying physical, chemical, or biological processes. In this thesis, we use advanced models that exploit the image structure and the measurement process in order to achieve improved temporal resolution. The thesis is divided into three chapters, each corresponding to a different imaging application.

In the first chapter, we propose a novel method to localize neurons in fluorescence microscopy images. Accurate localization of neurons enables us to scan only the neuron locations instead of the full brain volume and thus improve the temporal resolution of neuron activity monitoring. We formulate neuron localization as an inverse problem in which we reconstruct an image that encodes the locations of the neuron centers. The sparsity of the neuron centers serves as a prior model, while the forward model comprises shape models estimated from training data.

In the second chapter, we introduce multi-slice fusion, a novel framework for incorporating advanced prior models into inverse problems spanning many dimensions, such as 4D computed tomography (CT) reconstruction. State-of-the-art 4D reconstruction methods use model-based iterative reconstruction (MBIR), which depends critically on the quality of the prior model. Incorporating deep convolutional neural networks (CNNs) into the 4D reconstruction problem is difficult due to computational cost and the lack of high-dimensional training data. Multi-slice fusion integrates the tomographic forward model with multiple low-dimensional CNN denoisers along different planes to produce a 4D regularized reconstruction. The improved regularization allows each time frame to be reconstructed from fewer measurements, resulting in improved temporal resolution. Experimental results on sparse-view and limited-angle CT data demonstrate that multi-slice fusion can substantially improve reconstruction quality relative to traditional methods, while also being practical to implement and train.

In the final chapter, we introduce CodEx, a synergistic combination of coded acquisition and non-convex Bayesian reconstruction for improving acquisition speed in computed tomography (CT). In an ideal "step-and-shoot" tomographic acquisition, the object is rotated to each desired angle and a view is taken. However, step-and-shoot acquisition is slow and can waste photons, so in practice the object typically rotates continuously, leading to blurry views. This blur can then result in reconstructions with severe motion artifacts. CodEx works by encoding the acquisition with a known binary code that the reconstruction algorithm then inverts. The CodEx reconstruction method uses the alternating direction method of multipliers (ADMM) to split the inverse problem into iterative deblurring and reconstruction sub-problems, making reconstruction practical. CodEx allows fast data acquisition while maintaining good temporal resolution in the reconstruction.
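CodEx's use of a binary code rests on a classical coded-exposure observation: an uninterrupted (box) blur has nulls in its frequency response, so deblurring is ill-posed, while a fluttered binary code of the same length keeps every frequency alive. A toy 1D comparison (the 8-chip code below is a hypothetical example, not the code used in CodEx):

```python
import numpy as np

n = 64

# Box blur: shutter open continuously for 8 time steps during motion.
box = np.zeros(n)
box[:8] = 1.0

# Fluttered binary code over the same 8 steps: hypothetical chip
# pattern "11010011" (ones at positions 0, 1, 3, 6, 7).
code = np.zeros(n)
code[[0, 1, 3, 6, 7]] = 1.0

# Magnitude of the frequency response of each blur kernel.
H_box = np.abs(np.fft.fft(box))
H_code = np.abs(np.fft.fft(code))

# The box response hits (numerical) zero at harmonics of its length,
# while the coded response stays bounded away from zero everywhere,
# so dividing by it during deblurring is stable.
print(H_box.min(), H_code.min())
```

The same invertibility argument carries over to the rotation blur in CT: encoding the continuous acquisition with a known binary code makes the deblurring sub-problem in the ADMM splitting well-posed.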
29

TIME-OF-FLIGHT NEUTRON CT FOR ISOTOPE DENSITY RECONSTRUCTION AND CONE-BEAM CT SEPARABLE MODELS

Thilo Balke (15348532) 26 April 2023 (has links)
There is a great need for accurate image reconstruction in the context of non-destructive evaluation. Major challenges include the ever-increasing need for high-resolution reconstruction with limited scan and reconstruction time, and thus fewer and noisier measurements. In this thesis, we leverage advanced Bayesian modeling of the physical measurement process and probabilistic prior information about the image distribution in order to achieve higher image quality despite limited measurement time. We demonstrate efficient computational performance in several ways, through more efficient memory access, optimized parametrization of the system model, and multi-pixel parallelization. We show that by building high-fidelity forward models we can generate quantitatively reliable reconstructions despite very limited measurement data.

In the first chapter, we introduce an algorithm for estimating isotopic densities from neutron time-of-flight imaging data. Energy-resolved neutron imaging (ERNI) is an advanced neutron radiography technique capable of non-destructively extracting spatial isotopic information within a given material. Energy-dependent radiography image sequences can be created using neutron time-of-flight techniques. In combination with uniquely characteristic isotopic neutron cross-section spectra, isotopic areal densities can be determined on a per-pixel basis, resulting in a set of areal density images for each isotope present in the sample. By performing ERNI measurements over several rotational views, an isotope-decomposed 3D computed tomography becomes possible. We demonstrate a method involving robust and automated background estimation based on a linear programming formulation. The extremely high noise due to low-count measurements is overcome using a sparse coding approach. This allows for a significant improvement in computation time, from weeks to a few hours compared to existing neutron evaluation tools, enabling at the present stage a semi-quantitative, user-friendly routine application.

In the second chapter, we introduce the TRINIDI algorithm, a more refined algorithm for the same problem. Accurate reconstruction of 2D and 3D isotope densities is a desired capability with great potential impact in applications such as the evaluation and development of next-generation nuclear fuels. Neutron time-of-flight (TOF) resonance imaging offers a potential approach by exploiting the characteristic neutron absorption spectra of each isotope. However, it is a major challenge to compute quantitatively accurate images due to a variety of confounding effects, such as severe Poisson noise, background scatter, beam non-uniformity, absorption non-linearity, and extended source pulse duration. We present the TRINIDI algorithm, which is based on a two-step process: we first estimate the neutron flux and background counts, and then reconstruct the areal densities of each isotope at each pixel. Both components are based on the inversion of a forward model that accounts for the highly non-linear absorption, energy-dependent emission profile, and Poisson noise, while also modeling the substantial spatio-temporal variation of the background and flux. To do this, we formulate the non-linear inverse problem as two optimization problems that are solved in sequence. We demonstrate on both synthetic and measured data that TRINIDI can reconstruct quantitatively accurate 2D views of isotopic areal density, which can then be reconstructed into quantitatively accurate 3D volumes of isotopic volumetric density.

In the third chapter, we introduce a separable forward model for cone-beam computed tomography (CT) that enables efficient computation of a Bayesian model-based reconstruction. Cone-beam CT is an attractive tool for many kinds of non-destructive evaluation (NDE). Model-based iterative reconstruction (MBIR) has been shown to improve reconstruction quality and reduce scan time. However, the computation and storage of the system matrix are challenging. We present a separable representation of the system matrix that can be completely stored in memory and accessed cache-efficiently. This is done by quantizing the voxel positions for one of the separable sub-problems. A parallelized algorithm, which we refer to as the zipline update, speeds up computation of the solution by about 50 to 100 times on 20 cores by updating groups of voxels together. The quality of the reconstruction and the algorithm's scalability are demonstrated on real cone-beam CT data from an NDE application. We show that the reconstruction can be performed from a sparse set of projection views while reducing the artifacts visible in a conventional filtered back projection (FBP) reconstruction. We present qualitative results using a Markov random field (MRF) prior and a Plug-and-Play denoiser.
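The per-pixel isotope decomposition can be sketched in its simplest form: Beer-Lambert transmission through a mix of isotopes with energy-dependent cross sections, linearized by a log and solved by least squares. This deliberately ignores the background, flux variation, and pulse-duration effects that TRINIDI actually models, and the cross sections and densities below are made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical energy-dependent cross sections; columns = energy bins.
sigma = np.array([[5.0, 1.0, 0.2],    # isotope A
                  [0.5, 3.0, 1.5]])   # isotope B
z_true = np.array([0.10, 0.25])       # areal densities to recover

flux = 1e5  # incident neutrons per energy bin
# Beer-Lambert transmission plus Poisson counting noise.
expected = flux * np.exp(-sigma.T @ z_true)
counts = rng.poisson(expected)

# Taking logs linearizes the non-linear absorption model, after which
# the per-pixel areal densities follow from least squares.
attenuation = -np.log(counts / flux)
z_hat = np.linalg.lstsq(sigma.T, attenuation, rcond=None)[0]
print(np.round(z_hat, 3))
```

With low-count data the log-linearization biases the estimate and the noise is no longer well approximated as Gaussian, which is one reason the thesis inverts the full non-linear Poisson forward model instead.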
30

STUDY AND IMPLEMENTATION OF A SINGLE PIXEL CAMERA BY COMPRESSIVE SAMPLING

MATHEUS ESTEVES FERREIRA 15 June 2021 (has links)
Single-pixel imaging consists in computationally reconstructing 2-dimensional images from a set of intensity measurements taken by a single-point detector. To derive the spatial information of a scene, a set of modulation patterns is applied to the transmitted/backscattered light from the object and combined with the integral signal on the detector. First, we present an overview of such optical systems and implement a proof of concept that can perform image acquisition using three different modes of operation: raster scanning, Hadamard basis scanning, and Hadamard compressive sampling. Second, we explore how the different experimental parameters affect image acquisition. Finally, we compare how the three scanning modes perform for the acquisition of images of sizes ranging from (8px, 8px) to (128px, 128px).
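Hadamard basis scanning, the second mode of operation, can be sketched end-to-end in a few lines: because the Hadamard matrix is orthogonal, the full set of single-pixel measurements inverts exactly by a transpose. The sizes and patterns here are illustrative; a real setup typically displays shifted 0/1 patterns on a DMD and differences pairs to realize the ±1 entries:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# A tiny 8x8 "scene", flattened to a vector of pixel intensities.
rng = np.random.default_rng(3)
scene = rng.random((8, 8))
x = scene.ravel()

# Each single-pixel measurement is the bucket-detector reading under
# one +/-1 modulation pattern (one row of H).
H = hadamard(64)
measurements = H @ x

# Because H is orthogonal (H H^T = n I), reconstruction is a transpose.
x_hat = H.T @ measurements / 64
print(np.allclose(x_hat, scene.ravel()))  # True
```

Compressive sampling replaces this full set of 64 measurements with a subset and recovers the image by sparse optimization, which is where the acquisition-time savings of the third mode come from.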
