Learning Object Properties From Manipulation for Manipulation

The world contains objects with various properties: rigid, granular, liquid, elastic, or plastic. As humans, we plan our manipulation of an object by taking its properties into account. For instance, while holding a rigid object such as a brick, we adapt our grasp to its centre of mass so as not to drop it. While manipulating a deformable object, on the other hand, we may consider properties beyond the centre of mass, such as elasticity or brittleness, to keep the grasp stable. Knowing object properties is therefore an integral part of skilled manipulation.

To manipulate objects skillfully, robots should be able to predict object properties as humans do. Predicting these properties requires interaction with the objects: interactions give rise to distinct sensory signals that contain information about the object properties. Signals from a single sensory modality may be ambiguous or noisy. By integrating multiple sensory modalities (vision, touch, audio, or proprioception), a manipulated object can be observed from different aspects, which decreases the uncertainty in the observed properties. By analyzing the perceived sensory signals, a robot reasons about the object properties and adjusts its manipulation accordingly. During this adjustment, the robot can use a simulation model to predict the object's behavior and plan the next action. For instance, if an object that was assumed to be rigid turns out to behave deformably during interaction, an internal simulation model can predict the load force exerted on the object so that an appropriate manipulation can be planned for the next action. Learning about object properties is thus an active procedure: the robot explores object properties purposefully by interacting with the object and adjusting its manipulation based on the sensory information and on the object behavior predicted by an internal simulation model.

This thesis investigates the mechanisms mentioned above for learning object properties: (i) multi-sensory information, (ii) simulation, and (iii) active exploration. In particular, we investigate these three mechanisms as different and complementary ways of extracting one object property, the deformability of objects. First, we investigate the feasibility of using visual and/or tactile data to classify the content of a container based on the deformation observed when a robotic hand squeezes and deforms the container. In our results, both visual and tactile sensory data individually yield high accuracy in classifying the content type from the observed deformation. Next, we investigate the use of a simulation model to estimate the deformability that an object reveals during manipulation. The proposed method accurately identifies the deformability of the test objects on both synthetic and real-world data. Finally, we investigate integrating the deformation simulation into a robotic active perception framework to extract the heterogeneous deformability properties of an environment through physical interactions. In experiments on real-world objects, we show that the active perception framework can map the heterogeneous deformability properties of a surface.
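As a rough illustration of the first study's setting (not the thesis implementation), the sketch below classifies container content from per-modality deformation features and from an early fusion of both. The feature dimensions, class labels, and choice of an SVM classifier are all illustrative assumptions; the random arrays stand in for real visual and tactile recordings.

```python
# Minimal sketch: content classification from squeeze-induced deformation.
# All data here is synthetic and hypothetical; shapes are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical dataset: 120 squeezes, visual deformation descriptors
# (e.g. depth-change statistics) and tactile descriptors (e.g. pressure
# profiles), with three content classes (e.g. rice, water, empty).
X_visual = rng.normal(size=(120, 32))
X_tactile = rng.normal(size=(120, 16))
y = rng.integers(0, 3, size=120)

# Early fusion: concatenate the modalities before classification.
X_fused = np.hstack([X_visual, X_tactile])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("visual only :", cross_val_score(clf, X_visual, y, cv=5).mean())
print("tactile only:", cross_val_score(clf, X_tactile, y, cv=5).mean())
print("fused       :", cross_val_score(clf, X_fused, y, cv=5).mean())
```

Comparing the per-modality scores against the fused score is one simple way to test whether combining modalities actually reduces ambiguity, mirroring the comparison of visual versus tactile accuracy described above.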
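The second study estimates deformability by matching a simulation against observed deformation. Below is a minimal sketch of that general parameter-identification idea, assuming a toy linear spring model in place of the thesis's deformation simulation and using hypothetical force/indentation measurements.

```python
# Minimal sketch: identify a scalar stiffness k by matching a toy
# deformation model against observed force/indentation pairs.
# The spring model and the numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_indentation(k, forces):
    """Toy deformation model: indentation depth under each applied force."""
    return forces / k

# Hypothetical measurements: applied forces (N) and observed depths (m),
# lightly noisy around a ground-truth stiffness of about 100 N/m.
forces = np.array([0.5, 1.0, 1.5, 2.0])
observed = np.array([0.0051, 0.0098, 0.0152, 0.0197])

def loss(k):
    # Squared error between simulated and observed deformation.
    return np.sum((simulate_indentation(k, forces) - observed) ** 2)

res = minimize_scalar(loss, bounds=(1.0, 1000.0), method="bounded")
print(f"estimated stiffness: {res.x:.1f} N/m")
```

The same loop structure applies with a richer simulator: simulate the interaction under candidate parameters, compare against the sensed deformation, and optimize the parameters until the two agree.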
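The third study actively probes a surface to map spatially varying deformability. The sketch below captures only the general uncertainty-driven exploration idea under strong simplifying assumptions (a grid of independent cells and a known probe-noise level); it is not the thesis's active perception framework.

```python
# Minimal sketch: keep a running stiffness estimate per surface cell and
# always poke the most uncertain cell next. Grid size, noise level, and
# the independence assumption between cells are all illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_stiffness = rng.uniform(50, 300, size=(5, 5))  # hidden ground truth

n = np.zeros((5, 5))            # probe counts per cell
mean = np.zeros((5, 5))         # running stiffness estimate
var = np.full((5, 5), np.inf)   # unexplored cells = maximal uncertainty

for step in range(40):
    i, j = np.unravel_index(np.argmax(var), var.shape)  # most uncertain cell
    sample = true_stiffness[i, j] + rng.normal(0, 10)   # noisy probe result
    n[i, j] += 1
    mean[i, j] += (sample - mean[i, j]) / n[i, j]       # running mean update
    var[i, j] = (10.0 ** 2) / n[i, j]                   # shrinks with probes

print("max mapping error:", np.abs(mean - true_stiffness).max())
```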

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:kth-207154
Date: January 2017
Creators: Güler, Püren
Publisher: KTH, Robotik, perception och lärande, RPL, US-AB
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Doctoral thesis, monograph, info:eu-repo/semantics/doctoralThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
Relation: TRITA-CSC-A, 1653-5723 ; 2017:16, info:eu-repo/grantAgreement/EC/FP7/ICT-288533
