Experience forms the basis of learning. It is crucial in the development of human intelligence, and more broadly allows an agent to discover and learn about the world around it. Although experience is fundamental to learning, it is costly and time-consuming to obtain. In order to speed this process up, humans in particular have developed communication abilities so that ideas and knowledge can be shared without requiring first-hand experience.
Consider the same need for knowledge sharing among robots. Based on the recent growth of the field, it is reasonable to assume that in the near future there will be a collection of robots learning to perform tasks and gaining their own experiences in the world. In order to speed this learning up, it would be beneficial for the various robots to share their knowledge with each other. In most cases, however, the communication of knowledge among humans relies on the existence of similar sensory and motor capabilities. Robots, on the other hand, widely vary in perceptual and motor apparatus, ranging from simple light sensors to sophisticated laser and vision sensing.
This dissertation defines the problem of how heterogeneous robots with widely different capabilities can share experiences gained in the world in order to speed up learning. The work focuses specifically on differences in sensing and perception, which can be used both for perceptual categorization tasks and for determining actions based on environmental features. To motivate the problem, experiments first demonstrate that heterogeneity does indeed pose a problem during the transfer of object models from one robot to another. This is true even when using state-of-the-art object recognition algorithms based on SIFT features, which are designed to be distinctive and reproducible.
It is then shown that the abstraction of raw sensory data into intermediate categories for multiple object features (such as color, texture, and shape), represented as Gaussian Mixture Models, can alleviate some of these issues and facilitate effective knowledge transfer. Object representation, heterogeneity, and knowledge transfer are framed within Gärdenfors' conceptual spaces, i.e. geometric spaces that use similarity measures as the basis of categorization. This representation is used to model object properties (e.g. color or texture) and concepts (object categories and specific objects).
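The idea of abstracting raw sensor readings into property categories via Gaussian mixtures can be sketched as follows. This is a minimal, hypothetical illustration, not the dissertation's implementation: the property names, the use of a 1-D hue feature, and all mixture parameters are invented for the example.

```python
import math

def gmm_pdf(x, components):
    """Density of a 1-D Gaussian mixture; components = [(weight, mean, std), ...]."""
    return sum(
        w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in components
    )

# Hypothetical color-property models over a normalized hue feature in [0, 1].
# "red" uses two components because red hue wraps around 0/1.
property_models = {
    "red":  [(0.7, 0.02, 0.03), (0.3, 0.97, 0.03)],
    "blue": [(1.0, 0.60, 0.05)],
}

def categorize(hue):
    """Map a raw hue value to the most likely symbolic property label."""
    return max(property_models, key=lambda p: gmm_pdf(hue, property_models[p]))

print(categorize(0.03))  # a hue near 0 falls under the "red" mixture
print(categorize(0.58))  # a hue near 0.6 falls under the "blue" mixture
```

In practice the mixtures would be fit to each robot's own sensor data (e.g. via expectation-maximization), so the same symbolic property can sit over very different raw-feature distributions on different robots.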
A framework is then proposed to allow heterogeneous robots to build models of their differences with respect to the intermediate representation using joint interaction in the environment. Confusion matrices are used to map property pairs between two heterogeneous robots, and an information-theoretic metric is proposed to model information loss when going from one robot's representation to another. We demonstrate that these metrics allow for cognizant failure, where the robots can ascertain if concepts can or cannot be shared, given their respective capabilities.
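The mapping step described above can be sketched with a toy example. Assume the two robots jointly observe the same objects and record the property label each assigns; co-occurrence counts form the confusion matrix, and a conditional entropy serves as one plausible information-theoretic measure of loss. The labels and data below are invented for illustration, and the dissertation's exact metric may differ.

```python
import math
from collections import Counter

def information_loss(pairs):
    """Conditional entropy H(A|B) in bits: how much of robot A's property
    information is lost when expressed through robot B's categories.
    pairs = [(label_A, label_B), ...] from joint observations."""
    n = len(pairs)
    joint = Counter(pairs)            # the confusion matrix as sparse counts
    b_marginal = Counter(b for _, b in pairs)
    h = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        p_a_given_b = count / b_marginal[b]
        h -= p_ab * math.log2(p_a_given_b)
    return h

# Hypothetical joint-interaction data: robot B's coarser sensing lumps
# A's "red" and "orange" into a single "warm" category.
pairs = ([("red", "warm")] * 40
         + [("orange", "warm")] * 10
         + [("blue", "cool")] * 50)

loss = information_loss(pairs)
print(loss)  # strictly positive: "warm" conflates two of A's properties
```

A loss of zero would indicate a lossless mapping between the two robots' property sets; a large loss signals that a concept relying on the conflated properties cannot be faithfully transferred, supporting the kind of cognizant failure described above.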
After this period of joint interaction, the learned models are used to facilitate communication and knowledge transfer in a manner that is sensitive to the robots' differences. It is shown that heterogeneous robots are able to learn accurate models of their similarities and differences, and to use these models to transfer learned concepts from one robot to another in order to bootstrap the learning of the receiving robot. In addition, several types of communication tasks are used in the experiments. For example, how can a robot communicate a distinguishing property of an object to help another robot differentiate it from its surroundings? Throughout the dissertation, the claims are validated through both simulation and real-robot experiments.
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/33941 |
Date | 05 April 2010 |
Creators | Kira, Zsolt |
Publisher | Georgia Institute of Technology |
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive |
Detected Language | English |
Type | Dissertation |