Towards topological mapping with vision-based simultaneous localization and map building

Although the theory of Simultaneous Localization and Map Building (SLAM) is well developed, there are many challenges to overcome when incorporating vision sensors into SLAM systems. Visual sensors have different properties from range-finding sensors and therefore require different considerations. Existing vision-based SLAM algorithms extract point landmarks, which are required by SLAM algorithms such as the Kalman filter. Under this restriction, the types of image features that can be used are limited and the full advantages of vision are not realized. This thesis examines the theoretical formulation of the SLAM problem and the characteristics of visual information in the SLAM domain. It also examines different representations of uncertainty, features and environments. It identifies the need for a suitable framework for vision-based SLAM systems and proposes a framework called VisionSLAM, which utilizes an appearance-based landmark representation and a topological map structure to model metric relations between landmarks. A set of Haar feature filters is used to extract image structure statistics, which are robust against illumination changes, have good uniqueness properties and can be computed in real time. The algorithm is able to resolve and correct false data associations and is robust against random correlations resulting from perceptual aliasing. The algorithm has been tested extensively in a natural outdoor environment.
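The real-time claim for Haar feature filters rests on the standard integral-image (summed-area table) trick, which lets any rectangular sum be evaluated in constant time. The sketch below is not the thesis implementation; it is a minimal illustration of how Haar-like rectangle responses can be computed this way, with illustrative function and variable names assumed here.

```python
# Minimal sketch (assumed, not the thesis code) of Haar-like rectangle-filter
# responses computed via an integral image, the construction that makes such
# image structure statistics cheap enough for real-time use.
import numpy as np

def integral_image(gray):
    """Summed-area table with a zero row/column prepended for easy box sums."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = gray.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in rows [r0, r1) and cols [c0, c1), in O(1)."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_responses(gray, r0, c0, size):
    """Two simple Haar-like responses (vertical and horizontal edge filters)
    for the square patch whose top-left corner is (r0, c0)."""
    ii = integral_image(gray)
    half = size // 2
    left   = box_sum(ii, r0, c0,        r0 + size, c0 + half)
    right  = box_sum(ii, r0, c0 + half, r0 + size, c0 + size)
    top    = box_sum(ii, r0,        c0, r0 + half, c0 + size)
    bottom = box_sum(ii, r0 + half, c0, r0 + size, c0 + size)
    area = half * size
    # Normalising by rectangle area gives some robustness to global brightness changes.
    return np.array([(left - right) / area, (top - bottom) / area])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))          # stand-in for a grayscale image patch
    print(haar_responses(img, 10, 10, 16))
```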

Identifier oai:union.ndltd.org:ADTP/258487
Date January 2008
Creators Lee, Chun-Fan, Computer Science & Engineering, Faculty of Engineering, UNSW
Publisher Awarded by: University of New South Wales, Computer Science & Engineering
Source Sets Australasian Digital Theses Program
Language English
Detected Language English
Rights Copyright Lee Chun-Fan, http://unsworks.unsw.edu.au/copyright