  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Three-Dimensional Fluid Flow Measurement Techniques with Applications to Biological Flows

La Foy, Roderick Robert 16 September 2022 (has links)
The accuracy of plenoptic and tomographic particle image velocimetry (PIV) experimental methods is measured by simulating three-dimensional flows and measuring the errors between the estimated and true velocity fields. Parametric studies investigate the accuracy of these methods by simulating a range of camera numbers, camera angles, calibration errors, and particle densities. The plenoptic simulations combine lightfield imaging techniques with standard tomographic techniques and are shown to produce higher-fidelity measurements than either technique alone. The tomographic PIV simulations center on testing software developed for processing the large quantities of data produced during an experimental investigation of the flow field about a 3D-printed model of the flying snake Chrysopelea paradisi. A description of this tomographic PIV experiment is given along with basic results and recommendations for future investigation.

Doctor of Philosophy

Two different experimental measurement techniques that can be used to measure three-dimensional fluid flow fields are discussed. The first technique, investigated in simulations, uses cameras with arrays of lenses to simultaneously capture images of a flow field from multiple angles. A method of combining the data from multiple cameras is discussed and shown to yield more accurate estimates of the three-dimensional flow fields than a single camera alone. An additional technique that uses a group of standard cameras to measure three-dimensional flow fields is also discussed with respect to software that was developed for processing a large-volume dataset. This software was developed for processing data collected during an experimental investigation of the flow field about a 3D-printed model of the flying snake Chrysopelea paradisi. A description of this experiment is given along with basic results and recommendations for future investigation.
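The error measure used in such simulation studies, comparing an estimated velocity field against the known ground truth, can be sketched generically in a few lines. This is a hedged illustration, not code from the dissertation; the function name and array layout are assumptions.

```python
import numpy as np

def velocity_rms_error(v_true, v_est):
    """RMS magnitude of the vector error between a true and an estimated
    3-D velocity field, stored as arrays of shape (..., 3)."""
    diff = np.asarray(v_est, dtype=float) - np.asarray(v_true, dtype=float)
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=-1))))

# Toy check: a uniform 1-voxel/frame bias in the x component
# gives an RMS error of exactly 1.
v_true = np.zeros((4, 4, 4, 3))
v_est = v_true.copy()
v_est[..., 0] += 1.0
print(velocity_rms_error(v_true, v_est))  # → 1.0
```

In practice such a metric would be evaluated over each parametric sweep (camera count, angle, calibration error, seeding density) to produce the accuracy curves the study describes.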
2

Investigating the lateral resolution in a plenoptic capturing system using the SPC model

Damghanian, Mitra, Olsson, Roger, Sjöström, Mårten, Navarro Fructuoso, Hector, Martinez Corral, Manuel January 2013 (has links)
Complex multidimensional capturing setups such as plenoptic cameras (PCs) introduce a trade-off between various system properties. Consequently, established capturing properties, like image resolution, need to be described thoroughly for these systems, and models and metrics that assist in exploring and formulating this trade-off are highly beneficial for studying as well as designing complex capturing systems. This work demonstrates the capability of our previously proposed sampling pattern cube (SPC) model to extract the lateral resolution of plenoptic capturing systems. The SPC carries both ray information and the focal properties of the capturing system it models. The proposed operator extracts the lateral resolution from the SPC model through an arbitrary number of depth planes, giving a depth-resolution profile. The operator utilizes the focal properties of the capturing system as well as the geometrical distribution of the light containers, which are the elements of the SPC model. We have validated the lateral resolution operator for different capturing setups by comparing the results with those from Monte Carlo numerical simulations based on the wave-optics model. The lateral resolution predicted by the SPC model agrees with the results of the more complex wave-optics model better than both the ray-based model and our previously proposed lateral resolution operator do. This agreement strengthens the conclusion that the SPC fills the gap between ray-based models and real system performance by including the focal information of the system as a model parameter. The SPC is thus proven to be a simple yet efficient model for extracting the lateral resolution as a high-level property of complex plenoptic capturing systems.
3

Defining Ray Sets for the Analysis of Lenslet-Based Optical Systems Including Plenoptic Cameras and Shack-Hartmann Wavefront Sensors

Moore, Lori Briggs January 2014 (has links)
Plenoptic cameras and Shack-Hartmann wavefront sensors are lenslet-based optical systems that do not form a conventional image. The addition of a lens array into these systems allows for the aberrations generated by the combination of the object and the optical components located prior to the lens array to be measured or corrected with post-processing. This dissertation provides a ray selection method to determine the rays that pass through each lenslet in a lenslet-based system. This first-order, ray trace method is developed for any lenslet-based system with a well-defined fore optic, where in this dissertation the fore optic is all of the optical components located prior to the lens array. For example, in a plenoptic camera the fore optic is a standard camera lens. Because a lens array at any location after the exit pupil of the fore optic is considered in this analysis, it is applicable to both plenoptic cameras and Shack-Hartmann wavefront sensors. Only a generic, unaberrated fore optic is considered, but this dissertation establishes a framework for considering the effect of an aberrated fore optic in lenslet-based systems. The rays from the fore optic that pass through a lenslet placed at any location after the fore optic are determined. This collection of rays is reduced to three rays that describe the entire lenslet ray set. The lenslet ray set is determined at the object, image, and pupil planes of the fore optic. The consideration of the apertures that define the lenslet ray set for an on-axis lenslet leads to three classes of lenslet-based systems. Vignetting of the lenslet rays is considered for off-axis lenslets. Finally, the lenslet ray set is normalized into terms similar to the field and aperture vector used to describe the aberrated wavefront of the fore optic. 
The analysis in this dissertation is complementary to other first-order models that have been developed for a specific plenoptic camera layout or Shack-Hartmann wavefront sensor application. This general analysis determines the location where the rays of each lenslet pass through the fore optic establishing a framework to consider the effect of an aberrated fore optic in a future analysis.
4

The standard plenoptic camera: applications of a geometrical light field model

Hahne, Christopher January 2016 (has links)
The plenoptic camera is an emerging technology in computer vision, able to capture a light field image in a single exposure, which allows a computational change of the perspective view as well as of the optical focus, known as refocusing. Until now, there has been no general method to pinpoint the object planes that have been brought into focus, or the stereo baselines of the perspective views, posed by a plenoptic camera. Previous research presented simplified ray models to prove the concept of refocusing and to enhance image and depth-map quality, but lacked promising distance estimates and an efficient refocusing hardware implementation. In this thesis, a pair of light rays is treated as a system of linear equations whose solution yields the ray intersections, indicating the distances to refocused object planes or the positions of the virtual cameras that project the perspective views. A refocusing image synthesis is derived from the proposed ray model and further developed into an array of switch-controlled semi-systolic FIR convolution filters, whose real-time performance is verified through simulation and through an FPGA implementation in VHDL. A series of experiments is carried out with different lenses and focus settings, and the predictions are compared with the results of a real ray simulation tool and with processed light field photographs for which a blur metric has been considered. Predictions accurately match measurements in the light field photographs and deviate by less than 0.35 % from the real ray simulation. A benchmark assessment of the proposed refocusing hardware implementation suggests a computation-time speed-up of 99.91 % in comparison with a state-of-the-art technique. It is expected that this research will support the prototyping stage of plenoptic cameras and microscopes, as it helps specify depth sampling planes, thus localising objects, and provides a power-efficient refocusing hardware design for full-video applications such as broadcasting or motion picture arts.
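The core idea of treating a pair of light rays as a system of linear equations can be illustrated with a minimal sketch. This assumes a generic paraxial parameterisation x(z) = x0 + m·z, not the thesis's actual ray model; names are hypothetical.

```python
import numpy as np

def ray_intersection(x1, m1, x2, m2):
    """Intersect two paraxial rays x(z) = x0 + m*z by solving the
    2x2 linear system  m*z - x = -x0  for each ray.
    Returns the intersection point (z, x)."""
    A = np.array([[m1, -1.0],
                  [m2, -1.0]])
    b = np.array([-x1, -x2])
    z, x = np.linalg.solve(A, b)
    return z, x

# Two rays launched from x = 0 and x = 2 with opposite slopes
# meet at the plane z = 1 (at height x = 1).
z, x = ray_intersection(0.0, 1.0, 2.0, -1.0)
print(z, x)  # → 1.0 1.0
```

The z coordinate of such an intersection is exactly the kind of quantity the thesis uses to identify refocused object planes or virtual-camera positions.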
5

Computational and Design Methods for Advanced Imaging

Birch, Gabriel C. January 2012 (has links)
This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. It is divided into two parts: the first discusses a new active-illumination depth sensing modality, while the second discusses a passive-illumination technique called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage of this method is that it permits the illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements is presented, including full-system ray traces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.
6

Absolute depth using low-cost light field cameras

Rangappa, Shreedhar January 2018 (has links)
Digital cameras are increasingly used for measurement tasks within engineering scenarios, often as part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the accompanying environments. But for some applications these 2D results are not sufficient, specifically applications that require Z-dimensional data (depth data) along with the X- and Y-dimensional data. New camera system designs have previously been developed by integrating multiple cameras to provide 3D data, ranging from two-camera photogrammetry to multiple-camera stereo systems. Many earlier attempts have been made to record 3D data on 2D sensors, and many research groups around the world are currently working on camera technology from different perspectives: computer vision, algorithm development, metrology, and so on. Plenoptic, or lightfield, camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras utilize an additional Micro Lens Array (MLA) in front of the imaging sensor to create multiple viewpoints of the same scene and allow encoding of depth information. A small number of companies have explored the potential of lightfield cameras, but in the majority these efforts have been aimed at domestic consumer photography, only ever recording scenes as relative-scale greyscale images. This research considers the potential for lightfield cameras to be used for world-scene metrology applications, specifically to record absolute coordinate data.
Specific interest has been paid to a range of low-cost lightfield cameras in order to: understand the functional and behavioural characteristics of the optics; identify potential needs for optical and/or algorithm development; define sensitivity, repeatability and accuracy characteristics and the limiting thresholds of use; and allow quantified 3D absolute-scale coordinate data to be extracted from the images. The novel outputs of this work are: an analysis of lightfield camera system sensitivity, leading to the definition of Active Zones (linear data generation, good data) and Inactive Zones (non-linear data generation, poor data); the development of bespoke calibration algorithms that remove radial and tangential distortion from data captured using any MLA-based camera; and a camera-independent algorithm that delivers 3D coordinate data in absolute units within a well-defined measurable range for a given camera.
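Radial and tangential distortion removal of the kind mentioned above is commonly modelled with the Brown-Conrady equations. The sketch below is a generic illustration, not the bespoke calibration algorithms developed in the thesis: it distorts a normalised image point forward and recovers it by fixed-point iteration; all coefficient values are made up.

```python
import numpy as np

def distort_points(pts, k1, k2, p1, p2):
    """Apply the Brown-Conrady radial/tangential distortion model to
    normalised image coordinates pts of shape (N, 2)."""
    x, y = pts[:, 0], pts[:, 1]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.stack([x * radial + dx, y * radial + dy], axis=1)

def undistort_points(pts, k1, k2, p1, p2, iters=8):
    """Invert the model by fixed-point iteration (valid for the small
    distortions typical of calibrated lenses)."""
    pts = np.asarray(pts, dtype=float)
    und = pts.copy()
    for _ in range(iters):
        x, y = und[:, 0], und[:, 1]
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        und = (pts - np.stack([dx, dy], axis=1)) / radial[:, None]
    return und

# Round trip: distort a point, then recover it.
p = np.array([[0.1, 0.2]])
q = distort_points(p, k1=-0.2, k2=0.05, p1=0.001, p2=-0.001)
r = undistort_points(q, k1=-0.2, k2=0.05, p1=0.001, p2=-0.001)
```

An MLA-based camera adds per-lenslet geometry on top of this main-lens model, which is what makes the thesis's calibration development non-trivial.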
7

Conception et réalisation de caméras plénoptiques pour l'apport d'une vision 3D à un imageur infrarouge mono plan focal / Design and implementation of cooled infrared cameras with single focal plane array depth estimation capability

Cossu, Kevin 23 November 2018 (has links)
For a few years now, infrared cameras have been following the same miniaturization trend introduced with visible cameras. Today, this miniaturization is nearing a physical limit, leading the community to take a different approach called functionalization: that is, bringing an advanced imaging capability to the system. For infrared cameras, one of the most desired functions is 3D vision. This could be used to bring soldiers a passive telemetry tool working day and night, or to help UAVs navigate a complex environment. However, high-performance infrared cameras are expensive; multiplying the number of cameras would thus not be an acceptable solution to bring 3D vision to these systems. That is why this work focuses on bringing 3D vision to cooled infrared cameras using only a single focal plane array. During this PhD, I first identified the plenoptic technology as the most suitable for our need for 3D vision with a single cooled infrared sensor. I have shown that integrating a microlens array inside the dewar can bring this function to the infrared region. I then developed a complete design model for such a camera and used it to design and build a cooled infrared plenoptic camera. Finally, I created a method to characterize our camera and integrated this method into the image processing algorithms necessary to generate refocused images and derive the distance of objects in the scene.
8

Imagerie plénoptique à travers des milieux complexes par synthèse d'ouverture optique / Plenoptic imaging through complex media using synthetic aperture imaging

Glastre, Wilfried 25 September 2013 (has links)
We present a new type of plenoptic imager called LOFI (Laser Optical Feedback Imaging). The main advantage of this technique is that it is self-aligned, as the laser plays both the role of an emitter and a receiver of photons. Furthermore, thanks to an intra-cavity amplification effect caused by the laser dynamics, and to an acoustic tagging of the re-injected photons, this setup reaches shot-noise sensitivity (it is single-photon sensitive). This sensitivity is necessary if the aim is to make images through scattering media.
The other interest, which comes from the plenoptic property of our setup, is that one has access to complete information about the light rays: their position and their direction of propagation. This property enables unusual possibilities, such as keeping a constant resolution beyond the working distance of a microscope objective, or numerically compensating, after acquisition, for aberrations caused by propagation through heterogeneous media. Our setup is thus ideal for deep imaging through complex media (turbid and heterogeneous) such as biological ones. These properties are achieved at the price of a spatial filtering that degrades photon collection efficiency, and of a point-by-point image acquisition that is relatively slow.
9

Low-Complexity Multi-Dimensional Filters for Plenoptic Signal Processing

Edussooriya, Chamira Udaya Shantha 02 December 2015 (has links)
Five-dimensional (5-D) light field video (LFV), also known as plenoptic video, is a more powerful form of representing information about dynamic scenes than conventional three-dimensional (3-D) video. In this dissertation, the spectra of moving objects in LFVs are analyzed, and it is shown that such moving objects can be enhanced based on their depth and velocity by employing 5-D digital filters, defined here as depth-velocity filters. In particular, the spectral region of support (ROS) of a Lambertian object moving with constant velocity at constant depth is shown to be a skewed 3-D hyperfan in the 5-D frequency domain. Furthermore, it is shown that the spectral ROS of a Lambertian object moving at non-constant depth can be approximated as a sequence of ROSs, each a skewed 3-D hyperfan, in the 5-D continuous frequency domain. Based on this spectral analysis, a novel 5-D finite-extent impulse response (FIR) depth-velocity filter and a novel ultra-low-complexity 5-D infinite-extent impulse response (IIR) depth-velocity filter are proposed for enhancing objects moving with constant velocity at constant depth in LFVs. Furthermore, a novel ultra-low-complexity 5-D IIR adaptive depth-velocity filter is proposed for enhancing objects moving at non-constant depth in LFVs. Also proposed is an ultra-low-complexity 3-D linear-phase IIR velocity filter that can be incorporated into the design of 5-D IIR depth-velocity filters. To the best of the author's knowledge, these are the first such 5-D filters applied to enhancing moving objects in LFVs based on their depth and velocity. Numerically generated LFVs and LFVs of real scenes, captured with a commercially available Lytro light field (LF) camera, are used to test the effectiveness of the proposed 5-D depth-velocity filters.
Numerical simulation results indicate that the proposed 5-D depth-velocity filters outperform 3-D velocity filters and four-dimensional (4-D) depth filters in enhancing moving objects in LFVs. More importantly, the proposed filters are capable of exposing heavily occluded parts of a scene and of attenuating noise significantly. Considering their ultra-low complexity, the proposed 5-D IIR depth-velocity filter and 5-D IIR adaptive depth-velocity filter have significant potential for real-time applications.
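The principle behind a velocity filter — that a pattern translating at velocity v has its spectrum confined to the plane ω_t + v·ω_x = 0 — can be demonstrated in a reduced 2-D (x, t) setting. This is a generic FFT-mask sketch under that spectral model, not the dissertation's 5-D FIR/IIR designs; all names and parameters are assumptions.

```python
import numpy as np

def velocity_plane_mask(nx, nt, v, half_width=0.05):
    """Binary pass-band keeping frequencies near the plane
    w_t + v*w_x = 0, on which the spectrum of a pattern translating
    at velocity v is concentrated."""
    wx = np.fft.fftfreq(nx)[None, :]   # spatial frequencies
    wt = np.fft.fftfreq(nt)[:, None]   # temporal frequencies
    return (np.abs(wt + v * wx) <= half_width * np.max(np.abs(wx))).astype(float)

def apply_velocity_filter(video_xt, v, half_width=0.05):
    """video_xt: (nt, nx) space-time array; returns the component
    moving at approximately velocity v."""
    nt, nx = video_xt.shape
    F = np.fft.fft2(video_xt)
    return np.real(np.fft.ifft2(F * velocity_plane_mask(nx, nt, v, half_width)))

# Two sinusoidal patterns moving at opposite velocities; filtering
# at v = +1 should isolate the first one.
t = np.arange(64)[:, None]
x = np.arange(64)[None, :]
s1 = np.cos(2 * np.pi * 4 * (x - t) / 64)   # moving at v = +1
s2 = np.cos(2 * np.pi * 4 * (x + t) / 64)   # moving at v = -1
recovered = apply_velocity_filter(s1 + s2, v=1.0)
```

The dissertation's depth-velocity filters extend this idea to 5-D, where depth skews the pass-band into a hyperfan rather than a plane.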
10

Fotografování s využitím světelného pole / Light field photography

Svoboda, Karel January 2016 (has links)
The aim of this thesis is to explain terms such as light field, plenoptic camera and digital lens, and to explain the principle of rendering the resulting images with the option to select the plane of focus, the depth of field, changes in perspective, and a partial change in the viewing angle. The main outputs of this thesis are scripts for rendering images from a Lytro camera and an interactive application that clearly demonstrates the principles of plenoptic sensing.
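The refocusing principle described above — shifting each sub-aperture view in proportion to its offset from the aperture centre and summing — can be sketched as a generic shift-and-sum illustration with integer-pixel shifts. The array layout and parameter names are assumptions, not the thesis's actual scripts.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing of a 4-D light field.
    lightfield: (nu, nv, h, w) stack of sub-aperture views; alpha
    selects the synthetic focal plane (alpha = 0 -> plain average)."""
    nu, nv, h, w = lightfield.shape
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            # shift each view in proportion to its offset from the
            # aperture centre, then accumulate
            du = int(round(alpha * (u - (nu - 1) / 2)))
            dv = int(round(alpha * (v - (nv - 1) / 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (nu * nv)

# A point source seen with a disparity of 1 pixel per view step;
# refocusing with the matching alpha recovers a sharp point.
base = np.zeros((8, 8))
base[4, 4] = 1.0
lf = np.zeros((3, 3, 8, 8))
for u in range(3):
    for v in range(3):
        lf[u, v] = np.roll(base, (u - 1, v - 1), axis=(0, 1))
refocused = refocus(lf, -1.0)
```

Sweeping alpha moves the synthetic plane of focus through the scene, which is exactly the "select the plane of focus" interaction the thesis demonstrates.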
