Accurately mapping and localizing the objects surrounding a vehicle is an essential capability for autonomous vehicle systems. Many current environmental-mapping approaches rely on expensive LiDAR sensors. Researchers have been working to transition to cheaper sensors such as cameras, but so far the mapping accuracy of single-camera and dual-camera systems has not matched that of LiDAR systems. This thesis examines depth estimation algorithms and camera configurations for a triple-camera system to determine whether sensor data from an additional perspective improves the accuracy of camera-based systems. Using a synthetic dataset, the performance of a selection of stereo depth estimation algorithms is compared with that of two triple-camera depth estimation algorithms: disparity fusion and cost fusion. In both multi-baseline and multi-axis triple-camera configurations, the cost fusion algorithm outperformed the environmental-mapping accuracy of the non-CNN algorithms in a two-camera configuration.
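The cost fusion idea named in the abstract can be illustrated with a minimal sketch: each reference/target camera pair produces a matching-cost volume over candidate disparities, the volumes are summed, and the fused volume is reduced by winner-takes-all. This is a simplified illustration, not the thesis's implementation; the SAD cost, the assumption that all targets are rectified to a common baseline, and the function names are mine.

```python
import numpy as np

def sad_cost_volume(ref, tgt, max_disp):
    """Per-pixel sum-of-absolute-differences matching cost for each
    candidate disparity d: cost[d, y, x] = |ref[y, x] - tgt[y, x - d]|.
    Pixels with no valid match (x < d) get a large sentinel cost."""
    h, w = ref.shape
    cost = np.full((max_disp, h, w), 255.0)
    for d in range(max_disp):
        if d == 0:
            cost[d] = np.abs(ref - tgt)
        else:
            cost[d, :, d:] = np.abs(ref[:, d:] - tgt[:, :-d])
    return cost

def fused_disparity(ref, targets, max_disp):
    """Cost fusion: sum the cost volumes from every reference/target
    pair, then select the lowest-cost disparity (winner-takes-all).
    Assumes every target is rectified to the same baseline as ref."""
    fused = sum(sad_cost_volume(ref, t, max_disp) for t in targets)
    return np.argmin(fused, axis=0)

# Demo on synthetic data: two target views, each a 3-pixel shift of the
# reference, so the recovered disparity should be 3 in the valid region.
rng = np.random.default_rng(0)
ref = rng.random((8, 32))
tgt = np.zeros_like(ref)
tgt[:, :-3] = ref[:, 3:]
disp = fused_disparity(ref, [tgt, tgt], max_disp=8)
```

Summing costs before the winner-takes-all step (rather than fusing per-pair disparity maps afterwards) lets an ambiguous match in one pair be resolved by evidence from the other, which is the usual argument for cost fusion over disparity fusion.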
Identifier | oai:union.ndltd.org:CALPOLY/oai:digitalcommons.calpoly.edu:theses-3998 |
Date | 01 December 2021 |
Creators | Peter-Contesse, Jared |
Publisher | DigitalCommons@CalPoly |
Source Sets | California Polytechnic State University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Master's Theses |