
Absolute depth using low-cost light field cameras

Digital cameras are increasingly used for measurement tasks in engineering, often as part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the surrounding environment. For some applications, however, 2D results are not sufficient, specifically applications that require Z-dimensional (depth) data alongside the X and Y dimensions. Camera systems have previously been developed that integrate multiple cameras to provide 3D data, ranging from two-camera photogrammetry to multi-camera stereo systems. Many earlier attempts have been made to record 3D data on 2D sensors, and many research groups around the world continue to work on camera technology from different perspectives: computer vision, algorithm development, metrology, and so on. Plenoptic or lightfield camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras use an additional Micro Lens Array (MLA) in front of the imaging sensor to create multiple viewpoints of the same scene, allowing depth information to be encoded. A small number of companies have explored the potential of lightfield cameras, but for the most part these have been aimed at domestic consumer photography, only ever recording scenes as relative-scale greyscale images. This research considers the potential for lightfield cameras to be used for world-scene metrology applications, specifically to record absolute coordinate data. Particular attention has been paid to a range of low-cost lightfield cameras in order to: understand the functional and behavioural characteristics of the optics; identify the need for optical and/or algorithm development; define sensitivity, repeatability and accuracy characteristics and the limiting thresholds of use; and allow quantified 3D absolute-scale coordinate data to be extracted from the images. The novel outputs of this work are: an analysis of lightfield camera system sensitivity leading to the definition of Active Zones (regions of linear response that yield good data) and Inactive Zones (regions of non-linear response that yield poor data); the development of bespoke calibration algorithms that remove radial/tangential distortion from data captured with any MLA-based camera; and a lightfield-camera-independent algorithm that delivers 3D coordinate data in absolute units within a well-defined measurable range for a given camera.
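
The bespoke calibration algorithms referenced above are described in the thesis itself and are not reproduced in this record. As illustration only, the sketch below shows the standard Brown-Conrady radial/tangential distortion model that camera calibrations of this kind typically estimate and later invert; it is written in Python with NumPy, and the function name and coefficient symbols are illustrative assumptions rather than anything taken from the thesis.

    import numpy as np

    def distort_points(xy_norm, k1, k2, k3, p1, p2):
        """Apply the Brown-Conrady radial/tangential distortion model to
        normalised image coordinates of shape (N, 2). Coefficient names
        follow the common convention; their values must come from a
        camera calibration and are not specified here."""
        x, y = xy_norm[:, 0], xy_norm[:, 1]
        r2 = x**2 + y**2
        # Radial term grows with distance from the optical centre
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        # Tangential terms model lens/sensor misalignment
        x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
        y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
        return np.stack([x_d, y_d], axis=1)

In practice, distortion removal proceeds by estimating the coefficients from calibration-target images and then numerically inverting this mapping for each observed pixel before any depth or coordinate computation.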

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:763499
Date: January 2018
Creators: Rangappa, Shreedhar
Publisher: Loughborough University
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: https://dspace.lboro.ac.uk/2134/36224