Visual perception has been a significant subject of robotics research for decades and has accelerated in recent years as both the technology and the research community have become better prepared to take on new challenges with autonomous systems. This thesis presents a framework for 3D reconstruction with a stereo camera for the purpose of obstacle detection and mapping. In this application, a UAV works collaboratively with a UGV, providing high-level information about the environment via a downward-facing stereo camera. The approach uses frame-to-frame SURF feature matching to detect candidate points within the camera image. These feature points are projected into a sparse cloud of 3D points using stereophotogrammetry, and ICP registration on the sparse clouds estimates the rigid transformation between frames. The RTK-GPS-constrained pose estimate from the UAV is fused with the feature-matched estimate to align the reconstruction and eliminate drift. The reconstruction was tested on both simulated and real data. The results indicate that this approach improves frame-to-frame registration and produces a well-aligned reconstruction for a single pass compared to using the raw UAV position estimate alone. However, multi-pass registration errors on the order of about 0.6 meters occur between parallel passes, along with approximately 2 degrees of local rotation error relative to a reconstruction produced with Agisoft Metashape. On the other hand, the proposed system ran at an average frame rate of about 1.3 Hz, compared to Agisoft at 0.03 Hz. Overall, the system improved obstacle registration and can run online within existing ROS frameworks. / Master of Science / Visual perception has been a significant subject of robotics research for decades and has accelerated in recent years as both the technology and the research community have become better prepared to take on new challenges with autonomous systems.
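The projection of matched feature points into 3D described above can be sketched with the standard rectified-stereo pinhole model. This is a minimal illustration, not the thesis's implementation: the intrinsics (fx, fy, cx, cy), baseline, and function name below are hypothetical values chosen for the example.

```python
import numpy as np

def triangulate_stereo(uv_left, disparity, fx, fy, cx, cy, baseline):
    """Project matched left-image pixels into 3D camera-frame points using
    the rectified-stereo pinhole model:
        Z = fx * B / d,   X = (u - cx) * Z / fx,   Y = (v - cy) * Z / fy
    uv_left: (N, 2) pixel coordinates; disparity: (N,) in pixels.
    """
    u, v = uv_left[:, 0], uv_left[:, 1]
    z = fx * baseline / disparity      # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))

# Example: a feature at the principal point with 50 px disparity,
# fx = 500 px and a 0.10 m baseline, lies 1.0 m straight ahead.
pts = triangulate_stereo(np.array([[320.0, 240.0]]),
                         np.array([50.0]),
                         fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                         baseline=0.10)
```

Each frame's matched features yield one such sparse 3D cloud, which is then registered against the previous frame's cloud.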
This thesis presents a framework for 3D reconstruction using cameras for the purpose of obstacle detection and mapping. In this application, a UAV works collaboratively with a UGV, providing high-level information about the environment via a downward-facing stereo camera. The approach extracts features from the camera images to detect candidate points to be aligned. These feature points are projected into a sparse cloud of 3D points using stereo triangulation. The 3D points are aligned with an iterative solver that estimates the translation and rotation between frames. The RTK (Real-Time Kinematic) GPS-constrained position and orientation estimate from the UAV is combined with the feature-matched estimate to align the reconstruction and eliminate accumulated error. The reconstruction was tested on both simulated and real data. The results indicate that this approach improves frame-to-frame registration and produces a well-aligned reconstruction for a single pass compared to using the raw UAV position estimate alone. However, multi-pass registration errors on the order of about 0.6 meters occur between overlapping parallel passes, along with approximately 2 degrees of local rotation error relative to a reconstruction produced with the commercial product Agisoft. On the other hand, the proposed system ran at an average frame rate of about 1.3 Hz, compared to Agisoft at 0.03 Hz. Overall, the system improved obstacle registration and can run online within existing Robot Operating System (ROS) frameworks.
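The "iterative solver" step — estimating the rotation and translation that align two point clouds — reduces, once correspondences are fixed, to the SVD-based Kabsch/Procrustes solution computed inside each ICP iteration. The sketch below shows that inner step only (correspondences assumed known); it is an illustrative assumption, not the thesis's code.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R (3x3) and t (3,) such that R @ src_i + t ~= dst_i,
    via the SVD-based Kabsch solution used inside each ICP iteration.
    src, dst: (N, 3) arrays of corresponding 3D points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Determinant correction guards against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Example: recover a known 90-degree yaw and a translation.
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = best_rigid_transform(src, dst)
```

Full ICP wraps this step in a loop that re-matches nearest neighbors and re-solves until the transform converges; the recovered frame-to-frame transform is what gets fused with the RTK-GPS pose.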
Identifer | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/95852 |
Date | 22 November 2019 |
Creators | Donnelly, James Joseph |
Contributors | Mechanical Engineering, Kochersberger, Kevin B., Tokekar, Pratap, Wicks, Alfred L. |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Detected Language | English |
Type | Thesis |
Format | ETD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |