Model dependent inference of three-dimensional information from a sequence of two-dimensional images

In order to navigate autonomously through a complex environment, a mobile robot requires sensory feedback. This feedback will typically include the 3D motion and location of the robot and the 3D structure and motion of obstacles and other environmental features. The general problem considered in this thesis is how this 3D information may be obtained from a sequence of images generated by a camera mounted on a mobile robot.

The first set of algorithms developed in this thesis is for robust determination of the 3D pose of the mobile robot from a matched set of model and image landmark features. Least-squares techniques for point and line tokens that minimize rotation and translation simultaneously are developed and shown to be far superior to earlier techniques, which solved for rotation first and then translation. However, least-squares techniques fail catastrophically when outliers (or gross errors) are present in the match data. Outliers arise frequently from incorrect correspondences or gross errors in the 3D model. Robust techniques for pose determination are therefore developed to handle data contaminated by fewer than 50% outliers.

To make the model-based approach widely applicable, it is necessary to be able to build the landmark models automatically. The approach adopted in this thesis is one of model extension and refinement: a partial model of the environment is assumed to exist, and this model is extended over a sequence of frames. As the experiments show, prior knowledge of the small partial model greatly enhances the robustness of the 3D structure computations. The initial 3D model may contain errors, and these too are refined over the sequence of frames.

Finally, the sensitivity of pose determination and model extension to incorrect estimates of camera parameters is analyzed. It is shown that for small field-of-view systems, offsets in the image center do not significantly affect the computed location of the camera or the location of new 3D points in a world coordinate system. Errors in the focal length significantly affect only the component of translation along the optical axis in the pose computation.
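The joint pose-determination idea lends itself to a compact illustration. The sketch below is not the thesis's code; it assumes a simple pinhole camera with a fixed focal length and uses SciPy's Levenberg-Marquardt solver to estimate rotation and translation simultaneously by minimizing reprojection error over matched 3D model points and 2D image points. The function names, synthetic data, and default parameters are all illustrative assumptions.

```python
# Minimal sketch: joint least-squares pose from 3D-2D point matches.
# Not the thesis implementation; pinhole model and SciPy solver assumed.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_3d, rvec, tvec, focal=1.0):
    """Pinhole projection of world points after a rigid transform."""
    cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
    return focal * cam[:, :2] / cam[:, 2:3]


def residuals(params, points_3d, points_2d, focal):
    rvec, tvec = params[:3], params[3:]
    return (project(points_3d, rvec, tvec, focal) - points_2d).ravel()


def solve_pose(points_3d, points_2d, focal=1.0):
    """Estimate rotation and translation *simultaneously*, rather than
    solving for rotation first and translation second."""
    x0 = np.zeros(6)  # rotation vector (3) + translation (3); a rough
                      # initial pose helps when the true pose is large
    fit = least_squares(residuals, x0, method="lm",
                        args=(points_3d, points_2d, focal))
    return fit.x[:3], fit.x[3:]


# Synthetic check: recover a known small pose from noiseless matches.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (10, 3)) + np.array([0.0, 0.0, 5.0])
true_r, true_t = np.array([0.1, -0.2, 0.05]), np.array([0.2, -0.1, 0.3])
img = project(pts, true_r, true_t)
r_est, t_est = solve_pose(pts, img)
```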
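For the robust stage, one standard scheme with the sub-50% breakdown point described above is least median of squares; the thesis's exact robust formulation may differ, so the following is only a sketch in that spirit. Random minimal subsets of the matches are fit with the joint solver from the previous block, and the pose whose median squared reprojection error over all matches is smallest is kept. The trial count and sample size are illustrative assumptions.

```python
# Hedged sketch of a least-median-of-squares style robust pose estimator,
# reusing project() and solve_pose() from the previous block.
def robust_pose(points_3d, points_2d, focal=1.0, trials=200, sample_size=4):
    rng = np.random.default_rng(1)
    n = len(points_3d)
    best_pose, best_score = None, np.inf
    for _ in range(trials):
        idx = rng.choice(n, size=sample_size, replace=False)
        try:
            rvec, tvec = solve_pose(points_3d[idx], points_2d[idx], focal)
        except Exception:
            continue  # skip samples where the solver fails (e.g. degenerate)
        err = project(points_3d, rvec, tvec, focal) - points_2d
        score = np.median(np.sum(err ** 2, axis=1))  # median over ALL matches
        if score < best_score:
            best_pose, best_score = (rvec, tvec), score
    return best_pose
```

The median is the key design choice: because it ignores the largest half of the residuals, a pose fit to the correct matches still scores well even when just under half of the correspondences are gross errors, which is why this family of estimators tolerates contamination below 50%.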

Identifier: oai:union.ndltd.org:UMASS/oai:scholarworks.umass.edu:dissertations-2665
Date: 01 January 1992
Creators: Kumar, Rakesh
Publisher: ScholarWorks@UMass Amherst
Source Sets: University of Massachusetts, Amherst
Language: English
Detected Language: English
Type: text
Source: Doctoral Dissertations Available from Proquest