Monocular vision robots use a single camera to gather information about their environment. By analyzing the scene, the robot can determine the best direction in which to navigate. Many modern approaches to robot hallway navigation rely on a suite of sensors to detect features in the environment, such as laser range finders, inertial measurement units, motor encoders, and cameras.
Even when all of these sensors are combined, some data that could be useful for navigation goes unused. To step back and develop a baseline approach, this thesis explores the reliability and capability of using a camera alone for navigation. The basic navigation pipeline takes frames from the camera and breaks them down to find the most prominent lines. The location where these lines intersect determines the forward direction in which to drive the robot. To improve navigation accuracy, the algorithm is refined and additional features are extracted from the camera frames: line-intersection weighting to reduce noise from extraneous lines, floor segmentation to improve rotational stability, and person detection.
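A minimal sketch of the core idea, assuming OpenCV: detect prominent lines with a Hough transform, intersect them pairwise, and take a robust center of the intersections as the vanishing point toward which the robot should drive. The function name, thresholds, and the use of a median (in place of the thesis's weighting scheme) are illustrative assumptions, not the thesis's actual implementation.

```python
import itertools
import cv2
import numpy as np

def estimate_vanishing_point(frame):
    """Estimate a hallway vanishing point from the most prominent lines.

    Illustrative sketch; thresholds and robustification are assumptions.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Probabilistic Hough transform: segments come back as (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=60, maxLineGap=10)
    if segments is None:
        return None
    points = []
    for (x1, y1, x2, y2), (x3, y3, x4, y4) in itertools.combinations(
            (s[0].astype(float) for s in segments), 2):
        # Intersection of the two infinite lines through each segment.
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-6:
            continue  # near-parallel lines never meet usefully
        px = ((x1 * y2 - y1 * x2) * (x3 - x4)
              - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
        py = ((x1 * y2 - y1 * x2) * (y3 - y4)
              - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
        points.append((px, py))
    if not points:
        return None
    # The median resists intersections produced by extraneous lines;
    # the thesis's intersection weighting plays a similar noise-rejection role.
    return tuple(np.median(np.array(points), axis=0))
```

To steer, the horizontal offset between the estimated vanishing point and the image center can serve as the heading error, e.g. `error = vp[0] - frame.shape[1] / 2`, with the sign indicating which way to turn.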
Identifier | oai:union.ndltd.org:CALPOLY/oai:digitalcommons.calpoly.edu:theses-3188
Date | 01 June 2018
Creators | Ng, Matthew James
Publisher | DigitalCommons@CalPoly
Source Sets | California Polytechnic State University
Detected Language | English
Type | text
Format | application/pdf
Source | Master's Theses