Traditionally, models of a robot's kinematics and sensors have been provided by designers through manual processes. Such models are used for sensorimotor tasks such as manipulation and stereo vision. However, these techniques often yield static models based on one-time calibrations or idealized engineering drawings; such models often fail to represent the actual hardware, and their individual unimodal components, such as those describing kinematics and vision, may disagree with each other.

Humans, on the other hand, are not so limited. One of the earliest forms of self-knowledge learned during infancy is knowledge of the body and senses. Infants learn about their bodies and senses through the experience of using them in conjunction with each other. Inspired by this early form of self-awareness, the research presented in this thesis attempts to enable robots to learn unified models of themselves from data sampled during operation. In the presented experiments, an upper-torso humanoid robot, Nico, creates a highly accurate self-representation from data sampled by its sensors as it operates. The power of this model is demonstrated through a novel robot vision task in which the robot infers the visual perspective representing reflections in a mirror by watching its own motion reflected therein.

To construct this self-model, the robot first infers the kinematic parameters describing its arm. This is demonstrated first using an external motion-capture system and then using the robot's own stereo vision system. In a process inspired by infant development, the robot then mutually refines its kinematic and stereo vision calibrations, using its kinematic structure as the invariant against which both systems are calibrated. The product of this procedure is a precise mutual calibration between these two traditionally separate models, yielding a single, unified self-model.

The robot then uses this self-model to perform a unique vision task. Knowledge of its body and senses enables the robot to infer the position of a mirror placed in its environment. From this, it computes an estimate of the visual perspective describing reflections in the mirror, which is subsequently refined using correspondences between the expected positions of the robot's end-effector as reflected in the mirror and their real-world, imaged counterparts. The computed visual perspective enables the robot to use the mirror as an instrument for spatial reasoning by viewing the world from the mirror's perspective. This task draws on knowledge that the robot has inferred about itself through experience and approximates the mirror tests used as benchmarks of self-awareness in human infants and animals.
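The mutual refinement described above can be read as a joint nonlinear least-squares problem: kinematic and camera parameters are optimized together so that projections of the arm's forward-kinematics predictions match the detected image positions of the end-effector. The following is a minimal sketch of that idea only; the toy two-link arm, the pinhole camera model, and all function names are assumptions for illustration, not the thesis's actual formulation.

```python
import numpy as np
from scipy.optimize import least_squares

np.random.seed(0)

def forward_kinematics(kin_params, joint_angles):
    """Toy 2-link planar arm; kin_params are link lengths (l1, l2)."""
    l1, l2 = kin_params
    t1, t2 = joint_angles
    x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2)
    y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2)
    return np.array([x, y, 1.0])  # end-effector on a fixed depth plane

def project(cam_params, point):
    """Pinhole projection; cam_params = (fx, fy, cx, cy)."""
    fx, fy, cx, cy = cam_params
    return np.array([fx * point[0] / point[2] + cx,
                     fy * point[1] / point[2] + cy])

def residuals(params, joint_samples, observed_pixels):
    """Shared residual: projected kinematic predictions vs. detections."""
    kin_params, cam_params = params[:2], params[2:]
    preds = [project(cam_params, forward_kinematics(kin_params, q))
             for q in joint_samples]
    return (np.array(preds) - observed_pixels).ravel()

# Joint angles sampled while the arm moves, with matching image detections
# of the end-effector (synthesized here from ground-truth parameters).
true_params = np.array([0.30, 0.25, 500.0, 500.0, 320.0, 240.0])
joint_samples = np.random.uniform(-1.0, 1.0, size=(50, 2))
observed = residuals(true_params, joint_samples,
                     np.zeros((50, 2))).reshape(50, 2)

# Jointly refine kinematic and camera parameters from a rough initial guess.
x0 = true_params + np.random.normal(scale=0.05, size=6) * [1, 1, 100, 100, 10, 10]
fit = least_squares(residuals, x0, args=(joint_samples, observed))
print(fit.x)
```

Because the same rigid arm generates both the joint readings and the image observations, errors in either model appear in the shared residual, which is what allows the two calibrations to correct each other.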
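The mirror task rests on a standard geometric fact: reflection across a plane is a linear map in homogeneous coordinates, so a mirror can be treated as defining a virtual, reflected camera. Below is a minimal sketch under that assumption, taking the mirror plane {p : n·p = d} as already estimated; the plane values and variable names are illustrative and not taken from the thesis.

```python
import numpy as np

def reflect_across_plane(n, d):
    """4x4 homogeneous reflection across the plane {p : n.p = d}, unit normal n.

    Maps a point p to p' = p - 2(n.p - d)n.
    """
    n = np.asarray(n, dtype=float)
    H = np.eye(4)
    H[:3, :3] -= 2.0 * np.outer(n, n)
    H[:3, 3] = 2.0 * d * n
    return H

# Estimated mirror plane (assumed values): normal and offset in the camera frame.
n, d = np.array([0.0, 0.0, 1.0]), 1.5

# A world point and its mirror image.
p = np.array([0.2, -0.1, 0.4, 1.0])
p_reflected = reflect_across_plane(n, d) @ p

# The mirror-perspective camera is the real camera's pose composed with the
# reflection. Note that reflection flips handedness, so images seen from this
# virtual camera are mirrored left-to-right.
camera_pose = np.eye(4)  # real camera at the origin
mirror_camera_pose = reflect_across_plane(n, d) @ camera_pose
print(p_reflected)
print(mirror_camera_pose)
```

Projecting the known end-effector position through such a virtual camera yields the expected reflected image positions against which the perspective estimate can be refined.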
Identifier | oai:union.ndltd.org:PROQUEST/oai:pqdtoai.proquest.com:3582284 |
Date | 07 March 2015 |
Creators | Hart, Justin Wildrick |
Publisher | Yale University |
Source Sets | ProQuest.com |
Language | English |
Detected Language | English |
Type | thesis |