In this thesis, I study the computational advantages of the allocentric representation over the egocentric representation for autonomous local navigation. In the allocentric framework, all variables of interest are expressed with respect to a coordinate frame attached to an object in the scene, whereas in the egocentric framework they are expressed with respect to the robot frame at each time step.
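As an illustrative sketch of the distinction (the notation below is mine, not taken from the record): let ${}^{w}T_{r_t} \in SE(3)$ denote the robot pose at time $t$ relative to a fixed world frame $w$, and let ${}^{w}\ell \in \mathbb{R}^3$ be a landmark in that frame. The allocentric state keeps the static quantity ${}^{w}\ell$, while the egocentric state keeps
$$ {}^{r_t}\ell = \left({}^{w}T_{r_t}\right)^{-1} {}^{w}\ell, $$
which must be re-expressed through the incremental motion at every step, since ${}^{w}T_{r_{t+1}} = {}^{w}T_{r_t}\,{}^{r_t}T_{r_{t+1}}$ implies
$$ {}^{r_{t+1}}\ell = \left({}^{r_t}T_{r_{t+1}}\right)^{-1} {}^{r_t}\ell. $$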
In contrast to well-known results in the Simultaneous Localization and Mapping literature, I show that the degree of nonlinearity of these two representations, in which poses are elements of Lie-group manifolds, does not affect the accuracy of Gaussian-based filtering methods for perception at either the feature level or the object level. Furthermore, although the two representations are equivalent at the object level, the allocentric filtering framework is superior to the egocentric one at the feature level because of its advantages in the marginalization process. Moreover, I show that the object-centric perspective, inspired by the allocentric representation, enables novel linear-time filtering algorithms that significantly outperform state-of-the-art feature-based filtering methods, with a small trade-off in accuracy due to a low-rank approximation. Finally, I show that the allocentric representation also outperforms the egocentric representation in Model Predictive Control for local trajectory planning and obstacle avoidance tasks.
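For context on the marginalization step referenced above, the following is the standard Gaussian marginalization in information form (textbook material; the partitioning notation is mine and is not claimed to be the thesis's derivation). Splitting the state into variables to keep, $x_a$, and variables to marginalize out, $x_m$, with information matrix $\Lambda$ and information vector $\eta$,
$$ \Lambda = \begin{bmatrix} \Lambda_{aa} & \Lambda_{am} \\ \Lambda_{ma} & \Lambda_{mm} \end{bmatrix}, \qquad \bar{\Lambda}_{aa} = \Lambda_{aa} - \Lambda_{am}\Lambda_{mm}^{-1}\Lambda_{ma}, \qquad \bar{\eta}_{a} = \eta_{a} - \Lambda_{am}\Lambda_{mm}^{-1}\eta_{m}. $$
Fill-in from this Schur complement is confined to the variables directly coupled to $x_m$; in the egocentric representation, every remaining variable is additionally re-expressed relative to the new robot frame at each step, which is consistent with the abstract's claim that the allocentric framework has the advantage in marginalization.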
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/53489
Date | 08 June 2015
Creators | Ta Huynh, Duy Nguyen
Contributors | Dellaert, Frank
Publisher | Georgia Institute of Technology
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive
Language | en_US
Detected Language | English
Type | Dissertation
Format | application/pdf