The roles of allocentric representations in autonomous local navigation

In this thesis, I study the computational advantages of the allocentric representation over the egocentric representation for autonomous local navigation. In the allocentric framework, all variables of interest are represented with respect to a coordinate frame attached to an object in the scene, whereas in the egocentric framework they are represented with respect to the robot frame at each time step.
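To make the distinction concrete, here is a minimal sketch of the same landmark expressed in the two frames. The 2-D SE(2) setting, the function name, and the variable names are my own illustrative assumptions, not notation from the thesis:

```python
import math

def allocentric_to_egocentric(robot_pose, landmark_world):
    """Re-express a world-frame (allocentric) landmark in the robot frame.

    robot_pose: (x, y, theta) of the robot in the world frame (an SE(2) element).
    landmark_world: (lx, ly) coordinates of the landmark in the world frame.
    Returns the landmark relative to the robot: R(theta)^T (l - t).
    """
    x, y, theta = robot_pose
    lx, ly = landmark_world
    dx, dy = lx - x, ly - y
    c, s = math.cos(theta), math.sin(theta)
    # Apply the inverse rotation R(theta)^T to the translated point.
    return (c * dx + s * dy, -s * dx + c * dy)

# Robot at (1, 0) facing +y (theta = pi/2); landmark at (1, 1) in the world frame.
# Egocentrically, the landmark is one unit straight ahead of the robot: (1, 0).
print(allocentric_to_egocentric((1.0, 0.0, math.pi / 2), (1.0, 1.0)))
```

The key operational difference: the allocentric coordinates of a static landmark never change as the robot moves, while the egocentric coordinates must be re-transformed through the (uncertain) robot pose at every time step.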
In contrast with well-known results in the Simultaneous Localization and Mapping literature, I show that the degree of nonlinearity of these two representations, where poses are elements of Lie-group manifolds, does not affect the accuracy of Gaussian-based filtering methods for perception at either the feature level or the object level. Furthermore, although the two representations are equivalent at the object level, the allocentric filtering framework is superior to the egocentric one at the feature level owing to its advantages in the marginalization process. Moreover, I show that the object-centric perspective, inspired by the allocentric representation, enables novel linear-time filtering algorithms that significantly outperform state-of-the-art feature-based filtering methods, with a small trade-off in accuracy due to a low-rank approximation. Finally, I show that the allocentric representation also outperforms the egocentric representation in Model Predictive Control for local trajectory planning and obstacle avoidance tasks.

Identifier: oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/53489
Date: 08 June 2015
Creators: Ta Huynh, Duy Nguyen
Contributors: Dellaert, Frank
Publisher: Georgia Institute of Technology
Source Sets: Georgia Tech Electronic Thesis and Dissertation Archive
Language: en_US
Type: Dissertation
Format: application/pdf
