Vision-based navigation and control of a robotic vehicle

A recurrent problem in mobile robotics is the difficulty of accurately estimating a robot's pose. The quality of the estimate depends strongly on the type of sensory data used to infer it. Traditionally, localization has been achieved through odometry, by integrating wheel encoder signals. A major drawback of this approach, however, is its inability to provide an accurate estimate of heading orientation, which is a significant cause of odometry drift and can lead to navigation failure. Accordingly, improved localization methods are needed, and vision-based estimation holds promise for this purpose. This research proposes an alternative to pure odometric localization: a visual pose estimation algorithm that combines robotic vision with odometry. The method uses the vanishing points of scene images to recover the orientation of a mobile robot in two-dimensional space. To assess the performance of the visual pose estimation algorithm on an operational prototype robotic vehicle developed in the course of this research, an original pose tracking controller based on the geometric properties of Cardinal splines is implemented. The visual pose estimation algorithm is validated experimentally and compared against six sensory fusion schemes. The results show that localization accuracy improves by one order of magnitude over pure wheel encoder odometry. With regard to motion control, the pose tracking controller is also evaluated for the case of rectilinear trajectories. Future work on large-scale navigation strategies will build on these ideas.
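The thesis itself is not reproduced in this record, but the two ingredients the abstract names can be sketched generically. Under a pinhole camera model, the vanishing point of scene lines parallel to the robot's reference direction projects to a pixel column whose offset from the principal point encodes the yaw angle; and a Cardinal spline segment between two waypoints is a cubic Hermite curve whose endpoint tangents are scaled differences of neighboring control points. The function names, the intrinsics `fx`/`cx`, and the tension convention below are illustrative assumptions, not the author's implementation:

```python
import math

def heading_from_vanishing_point(u_vp, cx, fx):
    """Yaw (rad) of the camera relative to scene lines whose vanishing
    point falls at pixel column u_vp, for a pinhole camera with
    principal-point column cx and focal length fx (both in pixels).
    Generic geometry, not the thesis's specific algorithm."""
    return math.atan2(u_vp - cx, fx)

def cardinal_spline_point(p0, p1, p2, p3, t, tension=0.0):
    """Point at parameter t in [0, 1] on the Cardinal-spline segment
    from p1 to p2; p0 and p3 shape the endpoint tangents.
    tension = 0 gives the Catmull-Rom special case (one common convention)."""
    s = (1.0 - tension) / 2.0
    # Tangents at the segment endpoints
    m1 = tuple(s * (b - a) for a, b in zip(p0, p2))
    m2 = tuple(s * (b - a) for a, b in zip(p1, p3))
    # Cubic Hermite basis functions
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return tuple(h00 * a + h10 * ma + h01 * b + h11 * mb
                 for a, ma, b, mb in zip(p1, m1, p2, m2))
```

By construction the spline segment passes through `p1` at `t = 0` and `p2` at `t = 1`, which is why a pose tracking controller can chain such segments through successive waypoints; and a vanishing point sitting exactly at the principal point yields zero yaw.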

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/27361
Date: January 2006
Creators: Gagne-Roussel, Dave
Publisher: University of Ottawa (Canada)
Source Sets: Université d’Ottawa
Language: English
Detected Language: English
Type: Thesis
Format: 236 p.
