
Online 3D Reconstruction and Ground Segmentation using Drone based Long Baseline Stereo Vision System

This thesis presents online 3D reconstruction and ground segmentation using unmanned aerial vehicle (UAV) based stereo vision. For this purpose, a long baseline stereo vision system was designed and built. The system is intended to operate as part of an air and ground based multi-robot autonomous terrain surveying project at the Unmanned Systems Lab (USL), Virginia Tech, acting as a first-responder robotic system in disaster situations. The thesis covers the design of the long baseline stereo vision system, a study of raw stereo vision output, techniques to filter outliers from that output, a 3D reconstruction method, and a study of improving running time by controlling point cloud density. The work makes use of filtering methods and implementations from the Point Cloud Library (PCL) and of feature matching on the graphics processing unit (GPU) using OpenCV with CUDA. Besides 3D reconstruction, a central challenge in the project was speed, and several steps and ideas are presented to achieve it. The presented 3D reconstruction algorithm matches features in 2D images, converts the matched keypoints to 3D using disparity images, estimates the rigid body transformation between matched 3D keypoints, and fits the point clouds together. To correct and control orientation and localization errors, it fits re-projected UAV positions to GPS-recorded UAV positions using the iterative closest point (ICP) algorithm as a correction step. A new but computationally intensive process that uses superpixel clustering and plane fitting to increase the resolution of disparity images to sub-pixel resolution is also presented. The results section reports the accuracy of the 3D reconstruction. The presented process generates application-acceptable semi-dense 3D reconstruction and ground segmentation at 8-12 frames per second (fps). In a 3D reconstruction of a 25 m x 40 m area, with a UAV flight altitude of 23 m, the average obstacle localization error and the average obstacle size/dimension error are found to be 17 cm and 3 cm, respectively.
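
The rigid body transformation between two sets of matched 3D keypoints, as used in the reconstruction pipeline above, has a standard closed-form least-squares solution. The following is a minimal NumPy sketch of an SVD-based (Kabsch-style) estimator, included only to illustrate this step; the function name `estimate_rigid_transform` is hypothetical and this is not the thesis implementation.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R, t such that R @ src_i + t ~= dst_i (least squares).

    src, dst: (N, 3) arrays of matched 3D keypoints, N >= 3 and non-degenerate.
    Illustrative SVD-based (Kabsch-style) solution; not the thesis code.
    """
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)

    # Cross-covariance of the centered point sets.
    H = (src - src_centroid).T @ (dst - dst_centroid)

    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = dst_centroid - R @ src_centroid
    return R, t

# Small self-check on synthetic, noise-free data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(-5.0, 5.0, size=(50, 3))
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1             # make it a proper rotation
    t_true = np.array([1.0, -2.0, 0.5])
    dst = src @ Q.T + t_true
    R, t = estimate_rigid_transform(src, dst)
    print(np.allclose(R, Q), np.allclose(t, t_true))
```

In the pipeline described above, `src` would correspond to keypoints of the current stereo frame (lifted to 3D via the disparity image) and `dst` to the matched keypoints already placed in the reconstruction.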
/ MS / This thesis presents near real-time, called online, visual reconstruction in three dimensions (3D) using a ground-facing camera system on an unmanned aerial vehicle. A further result of this thesis is the separation of the ground from obstacles standing on it. To do this, a two-camera system, called a stereo vision system, with the cameras positioned comparatively far apart at 60 cm, was designed, and an algorithm and software to perform the visual 3D reconstruction were developed. The system is intended to operate as part of an air and ground based multi-robot autonomous terrain surveying project at the Unmanned Systems Lab, Virginia Tech, acting as a first-responder robotic system in disaster situations. The work makes use of the Point Cloud Library and of GPU functions in OpenCV with CUDA, which are popular computer vision libraries. Besides 3D reconstruction, a central challenge in the project was speed, and several steps and ideas are presented to achieve it. The presented 3D reconstruction algorithm is based on feature matching, a popular way to mathematically identify unique pixels in an image. Besides using image features for 3D reconstruction, the algorithm includes a correction step that corrects and controls orientation and localization errors using the iterative closest point algorithm. A new but computationally intensive process is also presented that improves the resolution of disparity images, an output of the developed stereo vision system, from single-pixel accuracy to sub-pixel accuracy. The results section reports the accuracy of the 3D reconstruction. The presented process generates application-acceptable 3D reconstruction and ground segmentation at 8-12 frames per second. In a 3D reconstruction of a 25 m x 40 m area, with a UAV flight altitude of 23 m, the average obstacle localization error and the average obstacle size/dimension error are found to be 17 cm and 3 cm, respectively.
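
The sub-pixel refinement of the disparity image mentioned in both abstracts (clustering the image into superpixels and fitting a plane to the disparities inside each one) could look roughly like the sketch below. This is an illustration under assumptions, not the thesis implementation: the function name `refine_disparity_with_planes` and the plane model d = a*u + b*v + c are hypothetical, and the superpixel labels are assumed to come from an external segmentation such as a SLIC-style clustering.

```python
import numpy as np

def refine_disparity_with_planes(disparity, labels, min_pixels=20):
    """Refine an integer-valued disparity map to sub-pixel values.

    For each superpixel (given by `labels`), fit a plane
    d = a*u + b*v + c to its valid integer disparities by least squares
    and replace those disparities with the plane values.

    disparity: (H, W) array; values <= 0 are treated as invalid.
    labels:    (H, W) integer array of superpixel ids.
    """
    refined = disparity.astype(np.float64).copy()
    v_grid, u_grid = np.mgrid[0:disparity.shape[0], 0:disparity.shape[1]]

    for sp_id in np.unique(labels):
        mask = (labels == sp_id) & (disparity > 0)
        if mask.sum() < min_pixels:
            continue  # too few valid pixels for a reliable plane fit
        u = u_grid[mask].astype(np.float64)
        v = v_grid[mask].astype(np.float64)
        d = disparity[mask].astype(np.float64)
        A = np.column_stack([u, v, np.ones_like(u)])
        coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
        refined[mask] = A @ coeffs  # sub-pixel disparities from the plane

    return refined
```

The per-superpixel least-squares fits over the whole image are consistent with the abstract's description of the process as computationally intensive.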

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/98009
Date: 16 November 2018
Creators: Kumar, Prashant
Contributors: Electrical and Computer Engineering, Kochersberger, Kevin B., Williams, Ryan K., Huang, Jia-Bin
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertation
Detected Language: English
Type: Thesis
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
