
Vision-based Navigation for Mobile Robots on Ill-structured Roads

Autonomous robots can replace humans in exploring hostile areas, such as Mars and
other inhospitable regions. A fundamental task for an autonomous robot is navigation.
Due to the inherent difficulty of understanding natural objects and changing environments,
navigation in fully unstructured environments, such as natural terrain, remains largely
unsolved. Navigation in ill-structured environments [1], where roads degrade but do not
disappear completely, offers a more tractable setting in which to address these difficulties.
We develop algorithms for robot navigation on ill-structured roads with monocular
vision based on two elements: appearance information and geometric information.
The fundamental problem in appearance-based navigation is road representation.
We propose a new type of road description, the vision vector space (V2-Space), which
is a set of local collision-free directions in image space. We report how the V2-Space is
constructed and how it can be used to incorporate vehicle kinematic, dynamic,
and time-delay constraints in motion planning. Appearance-based navigation fails when
its limitations, such as the lack of geometric information, are exposed. We therefore
expand the research to incorporate geometric information.
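To make the V2-Space idea concrete, the following is a minimal illustrative sketch, not the thesis's actual algorithm: given a binary road-segmentation mask, it collects the image-space directions from the vehicle's viewpoint (here assumed to be the bottom-center pixel) along which a short look-ahead ray stays on road pixels. The function name `v2_space`, the bottom-center origin, the field of view, and the ray-casting test are all our assumptions for illustration.

```python
import numpy as np

def v2_space(road_mask, num_dirs=31, fov_deg=120.0, min_len=20):
    """Illustrative sketch of a vision vector space (V2-Space):
    the set of image-space directions that are locally collision-free.

    road_mask : 2D bool array, True where a pixel is classified as road.
    Returns a list of angles (radians); 0 points straight up the image,
    positive angles point to the right.
    """
    h, w = road_mask.shape
    origin = (h - 1, w // 2)            # assumed vehicle viewpoint in the image
    half = np.deg2rad(fov_deg) / 2.0
    free = []
    for theta in np.linspace(-half, half, num_dirs):
        dr, dc = -np.cos(theta), np.sin(theta)   # unit step along the ray
        ok = True
        for t in range(1, min_len + 1):
            r = int(round(origin[0] + t * dr))
            c = int(round(origin[1] + t * dc))
            if r < 0 or c < 0 or c >= w or not road_mask[r, c]:
                ok = False                       # left the image or hit non-road
                break
        if ok:
            free.append(float(theta))
    return free
```

A motion planner could then intersect this direction set with the directions reachable under the vehicle's kinematic, dynamic, and time-delay constraints, which is the role the abstract describes for the V2-Space.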
We then present a vision-based navigation system that uses geometric information. To
compute depth with monocular vision, we use images obtained from different camera perspectives
during robot navigation. For any given image pair, the depth error in regions
close to the camera baseline can be excessively large. We name this degenerate region the
untrusted area; failing to account for it could lead to collisions. We analyze how untrusted
areas are distributed on the road plane and predict them before the robot makes its move.
We propose an algorithm that helps the robot avoid the untrusted area by selecting optimal
locations at which to capture frames while navigating. Experiments show that the algorithm
significantly reduces the depth error and hence the risk of collisions. Although this
approach is developed for monocular vision, it can be applied to multiple cameras to control
the depth error, and the concept of an untrusted area extends to two-view 3D reconstruction.
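The geometric intuition behind the untrusted area can be sketched as follows. Ground points lying near the extension of the line through the two camera centers subtend a tiny triangulation (parallax) angle, so their triangulated depth is extremely sensitive to pixel-matching noise. The code below is an illustrative sketch under that assumption, not the thesis's exact criterion: it flags road-plane points whose parallax angle falls below a threshold. The function name `untrusted_area` and the angle threshold are hypothetical.

```python
import numpy as np

def untrusted_area(c1, c2, points, min_angle_deg=2.0):
    """Flag ground-plane points with a degenerate two-view geometry.

    c1, c2  : (2,) arrays, camera centers projected onto the road plane.
    points  : (N, 2) array of candidate ground points.
    Returns a boolean array: True where the parallax angle between the
    two viewing rays is below min_angle_deg, i.e. depth is untrustworthy.
    """
    v1 = points - c1                      # rays from each camera to each point
    v2 = points - c2
    cosang = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return angle < min_angle_deg
```

Predicting this region before moving, as the abstract describes, amounts to evaluating such a criterion over the road plane for a candidate pair of frame locations and then choosing locations whose untrusted region stays clear of the intended path.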

Identifier: oai:union.ndltd.org:tamu.edu/oai:repository.tamu.edu:1969.1/ETD-TAMU-2008-08-39
Date: 16 January 2010
Creators: Lee, Hyun Nam
Contributors: Song, Dezhen, Kundur, Deepa
Source Sets: Texas A and M University
Language: en_US
Detected Language: English
Type: Book, Thesis, Electronic Dissertation
Format: application/pdf
