
Robotic manipulation based on visual and tactile perception

We still struggle to deliver autonomous robots that perform manipulation tasks as simple for a human as picking up items. Part of the difficulty lies in the fact that such operations require a robot that can deal with uncertainty in an unstructured environment. In this thesis, we propose the use of visual and tactile perception to provide solutions that improve the robustness of a robotic manipulator in such environments. First, we approach robotic grasping using a single 3D point cloud that offers a partial view of the objects present in the scene. Moreover, the objects are unknown: they have not been previously recognised and we do not have a 3D model with which to compute candidate grasping points. In experiments, we show that our solution is fast and robust, taking on average 17 ms to find a grasp that is stable 85% of the time.

Tactile sensors provide a rich source of information about the contact experienced by a robotic hand during the manipulation of an object. In this thesis, we exploit this type of data with deep learning to predict the stability of a grasp and to detect the direction of slip of a contacted object. We show that our solution correctly predicts stability 76% of the time from a single tactile reading. We also demonstrate that learning temporal and spatial patterns yields slip-direction detections that are correct up to 82% of the time and are delayed only 50 ms after the actual slip event begins.

Despite the good results achieved on these two tactile tasks, this data modality has a serious limitation: it can only be registered during contact. In contrast, humans can estimate how grasping an object will feel just by looking at it. Inspired by this, we present our contributions on learning to generate tactile responses from vision. We propose a supervised solution based on training a deep neural network that models the behaviour of a tactile sensor, given 3D visual information of the target object and grasp data as input. As a result, our system must learn to link vision to touch. In experiments, we show that our system learns to generate tactile responses for a set of 12 items with a relative error of only 0.06. Furthermore, we experiment with a semi-supervised solution that learns this task with a reduced need for labelled data. In experiments, we show that it learns our tactile data generation task with 50% less data than the supervised solution, increasing the error by only 17%.

Finally, we introduce our work on generating candidate grasps that are refined by simulating the tactile responses they would produce. This work unifies the contributions presented in this thesis, as it combines our modules for grasp calculation, stability prediction and tactile data generation. In early experiments, it finds grasps that are more stable than the original ones produced by our point cloud based method.

This doctoral thesis has been carried out with the support of the Spanish Ministry of Economy, Industry and Competitiveness through grant BES-2016-078290.
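As an illustration of the tactile learning component described above, the following is a minimal sketch, not the implementation used in the thesis, of a deep network that classifies a single tactile reading as a stable or unstable grasp. The taxel grid size (4x6), the layer widths and the use of PyTorch are all assumptions made purely for the example.

# Minimal sketch (hypothetical, not the thesis implementation) of a grasp-stability
# classifier over a single tactile reading, assuming the reading can be arranged
# as a small 2D pressure map of 4x6 taxels (an assumed size).
import torch
import torch.nn as nn

class TactileStabilityNet(nn.Module):
    """Binary classifier: stable vs. unstable grasp from one tactile frame."""
    def __init__(self, taxel_rows: int = 4, taxel_cols: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # local contact patterns
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * taxel_rows * taxel_cols, 64),
            nn.ReLU(),
            nn.Linear(64, 2),                              # logits for {unstable, stable}
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, taxel_rows, taxel_cols) normalised pressure values
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = TactileStabilityNet()
    fake_reading = torch.rand(8, 1, 4, 6)  # synthetic tactile frames for illustration
    logits = model(fake_reading)
    print(logits.shape)                    # torch.Size([8, 2])

In the thesis, readings from a real tactile sensor and ground-truth grasp outcomes would take the place of the synthetic tensors used here.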

Identifier: oai:union.ndltd.org:ua.es/oai:rua.ua.es:10045/118217
Date: 17 September 2020
Creators: Zapata-Impata, Brayan S.
Contributors: Gil, Pablo; Universidad de Alicante. Instituto Universitario de Investigación Informática
Publisher: Universidad de Alicante
Source Sets: Universidad de Alicante
Language: English
Detected Language: English
Type: info:eu-repo/semantics/doctoralThesis
Rights: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 licence, info:eu-repo/semantics/openAccess
