
Developmental learning of preconditions for means-end actions from 3D vision

Specifically equipped and programmed robots are highly successful in controlled industrial environments such as automated production lines. For robots to move from such controlled, uniform environments into unconstrained household environments, with their large range of conditions and variations, a new paradigm is needed to prepare them for deployment. Robots must be able to adapt quickly to their changing environments and learn on their own how to solve tasks in novel situations. This dissertation focuses on learning, in a developmental way, to predict the success of two-object means-end actions, for example the action of bringing one object into reach by pulling another object that it rests on. Here it is the “on top” relation that determines the action's success. Learning the preconditions for complex means-end actions via supervised learning can take several thousand training samples, which is impractical to generate, so more rapid learning capabilities are necessary. Three contributions of this dissertation address this learning problem. 1. Inspired by infant psychology, this dissertation investigates an approach to intrinsic motivation based on active learning, guiding the robot's exploration to create experience that is useful for improving classification performance. 2. This dissertation introduces histogram-based 3D vision features that encode the relative spatial relations between the surface points of object pairs, allowing a robot to reliably recognise the important spatial categories that affect means-end action outcomes. 3. Intrinsically motivated experience is extracted into symbolic category knowledge, encoding higher-level abstract categories. These symbolic categories are used for knowledge transfer by extending the state space of the action precondition learning classifiers.
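The intrinsic-motivation contribution can be illustrated with a small sketch. Uncertainty sampling is assumed here as a stand-in for the dissertation's actual exploration criterion, and `predict_proba` and the toy classifier are hypothetical names invented for this example:

```python
def select_query(candidates, predict_proba):
    """Uncertainty sampling: pick the candidate action whose predicted
    success probability is closest to 0.5, i.e. where the classifier is
    least certain, so that executing it yields the most informative
    training sample. A hedged sketch, not the thesis's exact criterion."""
    return min(candidates, key=lambda c: abs(predict_proba(c) - 0.5))

# Toy classifier: success probability grows with a scalar feature.
proba = lambda x: min(1.0, max(0.0, x / 10.0))
actions = [1, 3, 5, 7, 9]
q = select_query(actions, proba)  # feature 5 gives proba 0.5, most uncertain
```

The robot would execute the selected action, observe the outcome, retrain the classifier, and repeat, concentrating its limited trials where they most improve prediction accuracy.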
Depending on the actions and their preconditions, the contributions of this dissertation enable a robot to achieve success prediction accuracies above 85% with ten training samples, instead of the approximately 1,000 training samples that would otherwise be required. These results are achieved when (a) the action preconditions can be identified directly from the vision features used, or (b) the preconditions to be learnt rest upon already existing knowledge, in which case the results are achieved by reusing that knowledge. This dissertation demonstrates, in simulation, an alternative to hand-coding the knowledge a robot requires to interact with and manipulate objects in its environment. It shows that rapid learning, grounded in autonomous exploration, is feasible if the necessary vision features are constructed and existing knowledge is consistently reused.
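A minimal sketch of the histogram-feature idea can make the “on top” example concrete. The three-bin vertical-offset binning below is a hypothetical simplification invented for illustration, not the dissertation's actual feature definition:

```python
import random

def relation_histogram(points_a, points_b, z_band=(-0.05, 0.05)):
    """Normalised histogram over all point pairs (pa, pb) of the two
    objects' surface points, binning the vertical offset pa.z - pb.z
    into below / level / above. Such a histogram separates an 'on top'
    configuration (mass in the 'above' bin) from a 'beside' one."""
    lo, hi = z_band
    hist = [0, 0, 0]  # below, level, above
    for (_, _, za) in points_a:
        for (_, _, zb) in points_b:
            dz = za - zb
            if dz < lo:
                hist[0] += 1
            elif dz > hi:
                hist[2] += 1
            else:
                hist[1] += 1
    total = sum(hist)
    return [count / total for count in hist]

# Object A sampled resting on top of object B (A roughly 0.1 m higher):
random.seed(0)
a = [(random.random(), random.random(), 0.10 + random.random() * 0.01)
     for _ in range(50)]
b = [(random.random(), random.random(), random.random() * 0.01)
     for _ in range(50)]
h = relation_histogram(a, b)  # mass concentrates in the 'above' bin
```

Because the feature is a fixed-length vector regardless of object shape or point count, it can feed a standard classifier that predicts action success from the spatial relation.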

Identifier oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:675564
Date January 2015
Creators Fichtl, Severin Andreas Thomas-Morus
Publisher University of Aberdeen
Source Sets Ethos UK
Detected Language English
Type Electronic Thesis or Dissertation
Source http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227931
