Many everyday human skills can be framed as performing some task subject to constraints imposed by the environment or by the task itself. Such constraints are usually unobservable and frequently change between contexts. In this thesis, we explore the problem of learning control policies from data containing variable, dynamic and non-linear constraints on motion. We show that an effective approach is to learn the unconstrained policy in a way that is consistent with the constraints. We propose several novel algorithms for extracting such policies from movement data in which observations are recorded under different constraints. Furthermore, we show that, by doing so, we are able to learn representations of movement that generalise over constraints and can predict behaviour under new constraints. In our experiments, we test the algorithms on systems of varying size and complexity, and show that the novel approaches give significant improvements in performance compared with standard policy learning approaches that ignore the effect of constraints. Finally, we illustrate the utility of the approaches for learning from human motion capture data and for transferring behaviour to several robotic platforms.
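The central idea, learning an unconstrained policy that is consistent with constraint-filtered observations rather than regressing on them directly, can be illustrated with a small toy example. The sketch below is not the thesis's algorithm: it assumes a linear ground-truth policy, models each unobserved constraint as a projection onto the null space of a random direction, and uses one simple constraint-consistent objective (penalising prediction error only along the observed movement direction). All variable names and the specific objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3      # state / action dimension
n = 2000   # number of observations

# Ground-truth unconstrained policy (linear for this toy example): pi(x) = W_true @ x
W_true = rng.standard_normal((d, d))

X = rng.standard_normal((n, d))
U = np.empty((n, d))
for i in range(n):
    # Each observation is recorded under a different, unobserved constraint,
    # modelled here as projection onto the null space of a random direction a.
    a = rng.standard_normal(d)
    N = np.eye(d) - np.outer(a, a) / (a @ a)
    U[i] = N @ (W_true @ X[i])

# Naive regression: treats the constrained observations as if they were the policy itself.
B, *_ = np.linalg.lstsq(X, U, rcond=None)
W_naive = B.T

# Constraint-consistent regression (sketch): penalise only the component of the
# prediction along the observed movement direction, since the orthogonal
# component was removed by the unknown constraint.
norms = np.linalg.norm(U, axis=1)
keep = norms > 1e-8
P = U[keep] / norms[keep, None]                      # unit observation directions
A = np.einsum('ni,nj->nij', P, X[keep]).reshape(len(P), -1)
w_cc, *_ = np.linalg.lstsq(A, norms[keep], rcond=None)
W_cc = w_cc.reshape(d, d)

print("naive policy error:              ", np.linalg.norm(W_naive - W_true))
print("constraint-consistent policy error:", np.linalg.norm(W_cc - W_true))
```

Under these assumptions the naive fit is systematically biased (on average the random constraints remove a fixed fraction of each action, shrinking the estimated policy), whereas the constraint-consistent fit recovers the underlying policy provided the constraints vary sufficiently across observations.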
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:562531 |
Date | January 2009 |
Creators | Howard, Matthew |
Contributors | Vijayakumar, Sethu |
Publisher | University of Edinburgh |
Source Sets | Ethos UK |
Detected Language | English |
Type | Electronic Thesis or Dissertation |
Source | http://hdl.handle.net/1842/3972 |