1 | Learning to Assess Grasp Stability from Vision, Touch and Proprioception. Bekiroglu, Yasemin, January 2012
Grasping and manipulation of objects is an integral part of a robot's physical interaction with the environment. To cope with real-world situations, sensor-based grasping of objects and grasp stability estimation is an important skill. This thesis addresses the problem of predicting the stability of a grasp from the percepts available to a robot once the fingers close around the object, before attempting to lift it. A regrasping step can be triggered if an unstable grasp is identified. The percepts considered consist of object features (visual), gripper configurations (proprioceptive) and tactile imprints (haptic) when the fingers contact the object. The thesis studies tactile-based stability estimation by applying machine learning methods such as Hidden Markov Models. An approach to integrate visual and tactile feedback is also introduced to further improve the predictions of grasp stability, using Kernel Logistic Regression models.

Like humans, robots are expected to grasp and manipulate objects in a goal-oriented manner. In other words, objects should be grasped so as to afford subsequent actions: if I am to hammer a nail, the hammer should be grasped so as to afford hammering. Most work on grasping addresses only the problem of finding a stable grasp, without considering the task or action a robot is supposed to fulfil with the object. This thesis therefore also studies grasp stability assessment in a task-oriented way, based on a generative approach using probabilistic graphical models, namely Bayesian Networks. We integrate high-level task information introduced by a teacher in a supervised setting with low-level stability requirements acquired through a robot's exploration. The graphical model encodes probabilistic relationships between tasks and sensory data (visual, tactile and proprioceptive). The generative modelling approach enables inference of appropriate grasping configurations, as well as prediction of grasp stability. Overall, the results indicate that exploiting learning approaches for grasp stability assessment is applicable in realistic scenarios.
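As a rough illustration of the per-class HMM idea described in this abstract, the sketch below trains one Gaussian HMM per stability label on tactile time series and classifies a new sequence by comparing log-likelihoods, here using the hmmlearn library. The state count, feature layout and the division into stable/unstable sequence sets are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch: HMM-based grasp stability classification from tactile sequences.
# Assumes each sequence is an (n_timesteps, n_features) array of tactile readings.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_class_hmm(sequences, n_states=4):
    """Fit one HMM to all sequences of a class (e.g. the 'stable' grasps)."""
    X = np.concatenate(sequences)          # (total_timesteps, n_features)
    lengths = [len(s) for s in sequences]  # per-sequence lengths for hmmlearn
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify(sequence, stable_hmm, unstable_hmm):
    """Label a new tactile sequence by whichever class HMM explains it better."""
    return ("stable" if stable_hmm.score(sequence) > unstable_hmm.score(sequence)
            else "unstable")
```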
2 | Sensing and Control for Robust Grasping with Simple Hardware. Jentoft, Leif Patrick, 06 June 2014
Robots can move, see, and navigate in the real world outside carefully structured factories, but they cannot yet grasp and manipulate objects without human intervention. Two key barriers are the complexity of current approaches, which require complicated hardware or precise perception to function effectively, and the challenge of understanding system performance in a tractable manner given the wide range of factors that impact successful grasping. This thesis presents sensors and simple control algorithms that relax the requirements on robot hardware, and a framework to understand the capabilities and limitations of grasping systems.
3 | Grasping unknown novel objects from single view using octant analysis. Chleborad, Aaron A., January 1900
Master of Science / Department of Computing and Information Sciences / David A. Gustafson
Octant analysis, when combined with properties of the multivariate central limit theorem and the multivariate normal distribution, makes it possible to find a reasonable grasping point on an unknown novel object. This thesis's original contribution is the ability to find progressively improving grasp points in a poor and/or sparse point cloud. It is shown how octant analysis was implemented using common consumer-grade electronics to demonstrate its applicability to home and office robotics. Tests were carried out on three novel objects in multiple poses to determine the algorithm's consistency and effectiveness at finding a grasp point on those objects. Results from the experiments bolster the idea that applying octant analysis to the grasping-point problem is promising and deserves further investigation. Other applications of the technique are also briefly considered.
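The abstract does not spell out the selection rule, but one plausible reading of octant analysis is to partition the point cloud into eight octants around its centroid and treat each octant's sample mean (approximately normal by the multivariate CLT) as a grasp-point candidate. The sketch below follows that reading; the densest-octant heuristic is an assumption for illustration only.

```python
# Hypothetical sketch of octant analysis on a (N, 3) point cloud.
import numpy as np

def octant_partition(points):
    """Split a point cloud into 8 octants around its centroid."""
    centroid = points.mean(axis=0)
    signs = (points >= centroid).astype(int)            # per-axis 0/1 indicators
    octant_ids = signs[:, 0] * 4 + signs[:, 1] * 2 + signs[:, 2]
    return centroid, octant_ids

def candidate_grasp_point(points):
    """Use the mean of the most populated octant as a grasp candidate."""
    _, ids = octant_partition(points)
    counts = np.bincount(ids, minlength=8)
    best = np.argmax(counts)
    return points[ids == best].mean(axis=0)  # sample mean ~ normal by the CLT

grasp = candidate_grasp_point(np.random.rand(200, 3))  # toy usage on random points
```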
4 | Robotic manipulation based on visual and tactile perception. Zapata-Impata, Brayan S., 17 September 2020
We still struggle to deliver autonomous robots that can perform manipulation tasks as simple for a human as picking up items. Part of the difficulty lies in the fact that such operations require a robot that can deal with uncertainty in an unstructured environment. In this thesis we propose the use of visual and tactile perception to provide solutions that improve the robustness of a robotic manipulator in such environments.

First, we approach robotic grasping using a single 3D point cloud with a partial view of the objects present in the scene. Moreover, the objects are unknown: they have not been previously recognised and we do not have a 3D model with which to compute candidate grasping points. In experimentation, we show that our solution is fast and robust, taking on average 17 ms to find a grasp that is stable 85% of the time.

Tactile sensors provide a rich source of information about the contact experienced by a robotic hand during the manipulation of an object. We exploit this type of data with deep learning to predict the stability of a grasp and to detect the direction of slip of a contacted object. Our solutions correctly predict stability 76% of the time from a single tactile reading. We also demonstrate that learning temporal and spatial patterns yields slip-direction detections that are correct up to 82% of the time and are delayed only 50 ms after the actual slip event begins.

Despite the good results on these two tactile tasks, this data modality has a serious limitation: it can only be registered during contact. In contrast, humans can estimate the feeling of grasping an object just by looking at it. Inspired by this, we present our contributions on learning to generate tactile responses from vision. We propose a supervised solution based on training a deep neural network that models the behaviour of a tactile sensor, given 3D visual information of the target object and grasp data as input. As a result, the system must learn to link vision to touch. In experimentation, the system learns to generate tactile responses on a set of 12 items, being off by only 0.06 relative error points. We also experiment with a semi-supervised solution that learns this task with a reduced need for labelled data: it learns the tactile data generation task with 50% less data than the supervised solution, while increasing the error by only 17%.

Last, we introduce our work on generating candidate grasps that are improved through simulation of the tactile responses they would produce. This work unifies the contributions presented in this thesis, as it applies modules for calculating grasps, predicting stability and generating tactile data. In early experimentation, it finds grasps that are more stable than the original ones produced by our method based on 3D point clouds.

This doctoral thesis has been carried out with the support of the Spanish Ministry of Economy, Industry and Competitiveness through the grant BES-2016-078290.
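To make the single-reading stability prediction concrete, here is a minimal PyTorch sketch of a binary classifier over one tactile pressure map. The 16x16 input size, layer sizes and two-logit head are illustrative assumptions; the thesis's actual sensor resolution and architecture are not given in the abstract.

```python
# Hypothetical sketch: stable/unstable prediction from a single tactile reading.
import torch
import torch.nn as nn

class TactileStabilityNet(nn.Module):
    """Small CNN mapping one tactile pressure map to {unstable, stable} logits."""
    def __init__(self, h=16, w=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * (h // 4) * (w // 4), 2)

    def forward(self, x):                 # x: (batch, 1, h, w) pressure map
        return self.head(self.features(x).flatten(1))

logits = TactileStabilityNet()(torch.randn(8, 1, 16, 16))  # toy forward pass
```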
5 | Improved manipulator configurations for grasping and task completion based on manipulability. Williams, Joshua Murry, 16 February 2011
When a robotic system executes a task, there are a number of responsibilities that belong to the operator and/or the robot. A more autonomous system has more responsibilities in the completion of a task and must possess the decision-making skills necessary to deal adequately with them. The system must also handle environmental constraints that limit the region of operability and complicate the execution of tasks. There are decisions about the robot's internal configuration and about how the manipulator should move through space, avoid obstacles, and grasp objects. These motions usually have limits and performance requirements associated with them.
Successful completion of tasks in a given environment is aided by knowledge of the robot’s capabilities in its workspace. This not only indicates if a task is possible but can suggest how a task should be completed. In this work, we develop a grasping strategy for selecting and attaining grasp configurations for flexible tasks in environments containing obstacles. This is done by sampling for valid grasping configurations at locations throughout the workspace to generate a task plane. Locations in the task plane that contain more valid configurations are stipulated to have higher dexterity and thus provide greater manipulability of targets. For valid configurations found in the plane, we develop a strategy for selecting which configurations to choose when grasping and/or placing an object at a given location in the workspace.
These workspace task planes can also be utilized as a design tool to configure the system around the manipulator’s capabilities. We determine the quality of manipulator positioning in the workspace based on manipulability and locate the best location of targets for manipulation. The knowledge of valid manipulator configurations throughout the workspace can be used to extend the application of task planes to motion planning between grasping configurations. This guides the end-effector through more dexterous workspace regions and to configurations that move the arm away from obstacles.
The task plane technique employed here accurately captures a manipulator's capabilities. Initial tests exploiting these capabilities for system design and operation were successful, demonstrating this method as a viable starting point for incrementally increasing system autonomy.
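A minimal sketch of the task-plane idea follows: sample candidate grasp configurations at each (x, y) cell of a workspace plane and count how many are valid, so that higher counts indicate more dexterous regions. The helpers sample_configs() and is_valid() are assumed placeholders for the thesis's configuration sampler and validity checks (reachability, joint limits, collisions), not actual APIs from the work.

```python
# Hypothetical sketch: building a task plane of valid-configuration counts.
import numpy as np

def build_task_plane(xs, ys, z, sample_configs, is_valid):
    """Count valid grasp configurations at each (x, y) cell of a plane at height z.

    sample_configs(target) -> iterable of candidate arm configurations (assumed)
    is_valid(config)       -> True if reachable, collision-free, within limits (assumed)
    """
    plane = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            target = np.array([x, y, z])
            plane[i, j] = sum(is_valid(q) for q in sample_configs(target))
    return plane  # higher counts ~ higher dexterity / manipulability of targets
```

Grasp and place locations, and waypoints for motion planning between grasping configurations, can then be chosen by preferring high-count cells, as the abstract describes.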
6 | New object grasp synthesis with gripper selection: process development. Legrand, Tanguy, January 2022
A fundamental aspect to consider in factories is the transportation of items between steps in the production process. Conveyor belts do a great job of bringing items from point A to point B, but loading an item onto a workstation can demand a more precise and, in some cases, delicate approach. Nowadays this part is mostly handled by robotic arms. The issue encountered is that a robot arm's extremity, its gripper, cannot instinctively know how to grip an object. It is usually up to a technician to configure how and where the gripper grips an item.

The goal of this thesis is to analyse a problem given by a company: finding a way to automate grasp pose synthesis for a new object with the adapted gripper. This automated process can be separated into two sub-problems. First, how to choose the adapted gripper for a new object. Second, how to find a grasp pose on the object with the previously chosen gripper.

In the problem given by the company, the computer-aided design (CAD) 3D model of the concerned object is given. Also, the grasp shall always be performed vertically, i.e., the gripper approaches the object vertically and does not rotate about the x and y axes. The gripper for a new object is selected between two kinds of grippers: a two-finger parallel-jaw gripper and a three-finger parallel-jaw gripper. No dataset of objects is provided.

Object grasping is a well-researched subject, especially for two-finger grippers. However, little research addresses grasp pose synthesis for three-finger grippers, or gripper comparison, which are key parts of the studied problem. To answer the sub-problems mentioned above, machine learning is used for the gripper selection and a grasp synthesis method is used for finding the grasp pose. However, due to the lack of gripper comparison in the related work, a new approach needs to be created, inspired by the findings in the literature on grasp pose synthesis in general.

This approach consists of two parts. First, for each gripper and object combination, grasp poses are generated, each associated with a corresponding score. The scores are used to get an idea of the best gripper for an object, the best score for each gripper indicating how good a grasp could be on the object with that gripper. Second, the objects with their associated best score for each gripper are used as training data for a machine learning algorithm that assists in the choice of the gripper.

This approach leads to two research questions: "How to generate grasps of satisfying quality for an object with a certain gripper?" and "Is it possible to determine the best gripper for a new object via machine learning?" The first question is answered by using mathematical operations on the point cloud representation of the objects and a cost function (used to attribute a score), while the second is answered using machine learning classification and regression to gain insight into how machine learning can associate object properties with gripper efficiency.

The results show that grasp generation with the chosen cost function gives grasp poses similar to those a human operator would choose, but the machine learning models seem unable to assess grasp quality, whether with regression or classification.
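The two-part approach can be sketched as follows: score candidate grasps per object/gripper pair with the cost function, then train a classifier on object features against which gripper achieved the better score. The thesis's cost function and feature set are treated as black boxes here, and the random forest classifier is an illustrative stand-in since the abstract does not name the model used.

```python
# Hypothetical sketch of the two-part gripper-selection pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def best_grasp_score(candidate_grasps, cost):
    """Best (lowest-cost) grasp for one object/gripper pair; cost() stands in
    for the thesis's point-cloud-based scoring function."""
    return min(cost(g) for g in candidate_grasps)

def train_gripper_selector(object_features, best_scores_2f, best_scores_3f):
    """Learn to predict which gripper achieves the better score for an object."""
    y = (np.asarray(best_scores_3f) < np.asarray(best_scores_2f)).astype(int)
    clf = RandomForestClassifier(n_estimators=100).fit(object_features, y)
    return clf  # clf.predict(features) -> 0: two-finger, 1: three-finger
```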
7 | Learning to Grasp Unknown Objects using Weighted Random Forest Algorithm from Selective Image and Point Cloud Feature. Iqbal, Md Shahriar, 01 January 2014
This thesis demonstrates an approach for determining the best grasping location on an unknown object using a Weighted Random Forest algorithm. It uses the RGB-D values of an object as input to find a suitable rectangular grasping region as the output. To accomplish this task, it uses a subspace of the most important features drawn from a very high-dimensional feature space that contains both image and point cloud features. Using the most important features in the grasping algorithm enables the system to be computationally very fast while preserving maximum information gain. In this approach, running the Random Forest with optimal parameters (e.g., the number of trees, the number of features considered at each node, and the information gain criterion) ensures optimised learning, with the highest possible accuracy in minimum time in a practical setting. The Weighted Random Forest, chosen over Support Vector Machines (SVM), Decision Trees and AdaBoost for the implementation of the grasping system, outperforms these machine learning algorithms in both training and testing accuracy and in other performance estimates. The grasping system, learning from a score function, detects the rectangular grasping region by selecting the rectangle with the largest score. The system is implemented and tested on a Baxter Research Robot with a parallel-plate gripper.
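A scikit-learn sketch of the pipeline shape follows: importance-based feature selection over a combined image/point-cloud feature space, then a forest scoring candidate rectangles. The synthetic data, feature count, and the use of class weighting to stand in for the thesis's specific weighting scheme are all assumptions; the thesis's own Weighted Random Forest variant is not detailed in the abstract.

```python
# Hypothetical sketch: feature selection + (class-)weighted forest for rectangle scoring.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Placeholder data: each row holds concatenated image and point-cloud features
# for one candidate grasp rectangle; y marks whether the rectangle is graspable.
X = np.random.rand(500, 120)
y = np.random.randint(0, 2, 500)

# Keep only the most informative features, mirroring the dimensionality
# reduction described in the abstract.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200).fit(X, y), prefit=True)
X_small = selector.transform(X)

forest = RandomForestClassifier(
    n_estimators=100, max_features="sqrt", criterion="entropy",
    class_weight="balanced")            # one simple notion of "weighting" (assumed)
forest.fit(X_small, y)
rect_scores = forest.predict_proba(X_small)[:, 1]
best_rect = int(np.argmax(rect_scores))  # pick the top-scoring rectangle
```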
8 | A Deep-Learning-Based Approach for Stiffness Estimation of Deformable Objects. Yang, Nan, January 2022
Object deformation is an essential factor when a robot manipulates an object, as the deformation impacts the grasping of a deformable object either positively or negatively. One of the most challenging problems with deformable objects is estimating stiffness parameters such as Young's modulus and Poisson's ratio. This thesis presents a learning-based approach to predicting the stiffness parameters of a 3D (volumetric) deformable object based on vision and haptic feedback. A deep learning network is designed to predict the Young's modulus of homogeneous isotropic deformable objects from the forces applied when squeezing the object and depth images of the deformed part of the object. The results show that the developed method can estimate the Young's modulus of the selected synthetic objects in the validation dataset with a 3.017% error upper bound on the 95% confidence interval. The conclusion is that this method contributes to predicting the Young's modulus of homogeneous isotropic objects in simulation environments. In future work, the diversity of object shapes in the samples can be expanded for broader application in predicting Young's modulus, and the method can also be extended to real-world objects after validation with real-world experiments.
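The force-plus-depth-image regression can be sketched as a small two-branch network: a CNN encodes the depth image of the deformed region, the encoding is concatenated with the squeezing force, and an MLP regresses Young's modulus. The layer sizes and 64x64 input are illustrative assumptions; the thesis's actual architecture is not given in the abstract.

```python
# Hypothetical sketch: Young's modulus regression from depth image + squeeze force.
import torch
import torch.nn as nn

class StiffnessNet(nn.Module):
    """Fuse a depth-image encoding with the applied force to predict stiffness."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mlp = nn.Sequential(nn.Linear(32 + 1, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, depth, force):      # depth: (B,1,H,W), force: (B,1)
        return self.mlp(torch.cat([self.cnn(depth), force], dim=1))

pred = StiffnessNet()(torch.randn(4, 1, 64, 64), torch.rand(4, 1))
loss = nn.functional.mse_loss(pred, torch.rand(4, 1))  # train against known moduli
```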
9 | Integration of a visual perception pipeline for object manipulation. Shi, Xiyu, January 2020
The integration of robotic modules is common both in industry and in academia, especially when it comes to robotic grasping and object tracking. However, there are usually two challenges in the integration process. First, the respective fields are extensive, making it challenging to select a method in each field for integration according to specific needs. Second, because integrated systems are rarely discussed in academia, there is no established set of metrics to evaluate them. Addressing the first challenge, this thesis reviews and categorises popular methods in the fields of robotic grasping and object tracking, summarising their advantages and disadvantages. This categorisation provides the basis for selecting methods according to the specific needs of application scenarios. For evaluation, two well-established methods for grasp pose detection and object tracking are integrated for a common application scenario, and the technical as well as task-related challenges of the integration process are discussed. Finally, in response to the second challenge, a set of metrics is proposed to evaluate the integrated system.
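The abstract does not list the proposed metrics, but two common choices for an integrated grasping-and-tracking pipeline are grasp success rate and per-frame tracking latency. The sketch below is a minimal evaluation harness under that assumption; tracker_step() is an assumed callable for one tracking update, not an API from the thesis.

```python
# Hypothetical sketch: simple metrics harness for an integrated pipeline.
import time
from dataclasses import dataclass, field

@dataclass
class PipelineMetrics:
    grasp_attempts: int = 0
    grasp_successes: int = 0
    frame_latencies: list = field(default_factory=list)

    def record_grasp(self, success: bool):
        self.grasp_attempts += 1
        self.grasp_successes += int(success)

    def time_frame(self, tracker_step, frame):
        t0 = time.perf_counter()
        result = tracker_step(frame)       # one tracking update (assumed callable)
        self.frame_latencies.append(time.perf_counter() - t0)
        return result

    def summary(self):
        n = max(self.grasp_attempts, 1)
        lat = self.frame_latencies or [0.0]
        return {"success_rate": self.grasp_successes / n,
                "mean_latency_s": sum(lat) / len(lat)}
```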
10 | Natural Hand Based Interaction Simulation using a Digital Hand. Vipin, J S, January 2013
The focus of the present work is natural, human-like grasping for realistic performance simulations in a digital human modelling (DHM) environment.
Performance simulation for grasping in DHM is typically done through high-level commands to the digital human models (DHMs). This calls for a natural and unambiguous scheme to describe a grasp, one that implicitly accommodates variations due to hand form, object form and hand kinematics. A novel relational description scheme is developed for this purpose. The grasp is modelled as a spatio-temporal relationship between patches (closed regions on the surface) of the hand and the object. The task dependency of the grasp affects only the choice of the relevant patches. The present scheme thus makes a human-like grasp description possible. Grasping can be simulated either in an interactive command mode, as discussed above, or in an autonomous mode. In the autonomous mode the patches have to be computed; this is done using the psychological concept of affordance. The scheme is employed to select a tool from a set of tools, and the various types of grasps a user may adopt while grasping a spanner to manipulate a nut are simulated.
Grasping of objects by humans evolves through distinct naturally occurring phases, such as re-orientation, transport and preshape. The hand is taken to the object's ballpark using a novel concept of a virtual object. Before contact is established the hand assumes a shape similar to the global shape of the object, called preshaping. Various hand preshape strategies are simulated using an optimization scheme. Since the focus of the present work is human-like grasping, the mechanism that drives the DHMs should also be anatomically pertinent. A methodology is developed wherein hand-object contact establishment is based on the anatomical observation of a logarithmic spiral pattern during finger flexion. The effect of slip in the presence of friction has been studied for 2D and 3D object grasping endeavours, and the slip locus is generated computationally. In-grasp slip studies are also done, simulating the finger and object response to slip.
It is desirable that grasping performance simulations be validated for the diverse hands that people have. In the absence of an available database of articulated, bio-fidelic digital hands, this work develops a semi-automatic methodology for developing subject-specific hand models from a single-pose 3D laser scan of the subject's hand. The methodology is based on the clinical evidence that creases and joint locations on the human hand are strongly correlated. The hand scan is segmented into palm, wrist and phalanges, both manually and computationally. The computational segmentation is based on the crease markings in the hand scan, which are identified by the user explicitly painting them in mesh processing software. Joint locations are computed on this segmented hand, and a 24-dof kinematic structure is automatically embedded into the hand scan. The joint axes are computed using a novel palm-plane-normal concept and rectified using convergence and intra-finger constraints. The methodology is significantly tolerant to noise in the scan and to the pose of the hand. With the proposed methodology, articulated, realistic, custom hand models can be generated.
Thus, the reported work presents a geometric framework for comprehensive simulation of grasping performance in a DHM environment.
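For reference, the logarithmic spiral pattern the contact-establishment methodology builds on is r = a * exp(b * theta). A short sketch of a fingertip flexion path generated from it follows; the parameter values are illustrative only and would in practice be fit per finger from observed joint geometry, not taken from the thesis.

```python
# Hypothetical sketch: fingertip trajectory as a logarithmic spiral.
import numpy as np

def log_spiral(a, b, theta):
    """Points on the logarithmic spiral r = a * exp(b * theta), as (x, y) pairs."""
    r = a * np.exp(b * theta)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

theta = np.linspace(0.0, 1.5 * np.pi, 100)       # sweep of the flexion motion
fingertip_path = log_spiral(a=1.0, b=-0.15, theta=theta)  # (100, 2) curve
```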