11. Multi-Modal Scene Understanding for Robotic Grasping
Bohg, Jeannette. January 2011.
Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can, for example, be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane, such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s, when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. Household robots, especially, are far from being deployable as general-purpose devices. Although advances have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments, given unexpected events and uncertainty in perception and execution.

In this thesis, we analyze which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background, but also from each other. Once this is achieved, many other tasks become much easier: the configuration of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem, and even state-of-the-art methods may fail. Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly.

In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Depending on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps in both a closed- and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios.
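The prediction-and-exploration idea above can be made concrete with a minimal sketch: represent the scene as an occupancy grid of Bernoulli probabilities, measure per-cell uncertainty as Shannon entropy, and direct the next observation at the most uncertain region. The grid, the entropy criterion, and all names below are illustrative assumptions, not the thesis's actual scene representation.

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of Bernoulli occupancy estimates; 0 when certain."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def next_exploration_target(occupancy):
    """Index of the grid cell whose occupancy prediction is most uncertain."""
    h = cell_entropy(occupancy)
    return np.unravel_index(np.argmax(h), h.shape)

occupancy = np.full((10, 10), 0.5)   # unobserved cells: maximally uncertain
occupancy[:4, :4] = 0.05             # observed free space
occupancy[7:, 7:] = 0.95             # observed object surface
print(next_exploration_target(occupancy))  # lands in the unobserved region
```

The same entropy map could equally score candidate viewpoints or haptic probes; the point is only that a scalar uncertainty measure turns "explore the scene at specifically uncertain places" into an argmax.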
12. Functional aspects of colour processing within the human brain
Georgescu, Andrei. 01 May 2006.
In a seminal work, Ungerleider and Mishkin (1982) offered substantial evidence that two separate visual pathways, coding "what" and "where", exist within the primate brain. More recently, human evidence has led to the what/where pathways being reconsidered in terms of a ventral stream (vision for perception) and a dorsal stream (vision for action; Goodale & Milner, 1992). Consistent with this, many studies have demonstrated that magnocellular (luminance) information is overrepresented within the dorsal stream, while parvocellular input (colour, shape, consistency) represents the primary source of information for the ventral stream. Although luminance contrast is important in perceiving moving objects, colour discrepancies help the visual system to identify the detailed characteristics of the environment and, subsequently, to prepare the motor system for action. This thesis endeavors to determine the role played by colour, in contrast with luminance, in influencing the programming and control of movement production. Using a grasping paradigm and two different luminance conditions (iso-luminance vs. hetero-luminance) within two separate experiments (experiment 1: programming; experiment 2: online control), we show that chromatic information can successfully be used by motor circuits to complete the grasping task faultlessly. Although significant temporal delays in reaction time and movement time between colour and luminance processing are identified, the human visual system seems able to fully integrate colour features for action with no significant spatial error cost.
13. Bimanual prehension to a solitary target
Clarke, Nicky. 20 August 2007.
Grasping and functionally interacting with a relatively large or awkwardly shaped object requires the independent and cooperative coordination of both limbs. Acknowledging the vital role of visual information in successfully executing any prehensile movement, the present study aimed to clarify how well existing bimanual coordination models (Kelso et al., 1979; Marteniuk & Mackenzie, 1980) can account for bimanual prehension movements targeting a single end-point under varying visual conditions. We therefore employed two experiments in which vision of the target object and limbs was either available or unavailable during a bimanual movement, in order to determine the effects of visual or memory-guided control (i.e., feedback vs. feedforward) on limb coordination.

Ten right-handed participants (mean age = 24.5) performed a specific bimanual prehension movement targeting a solitary, static object under both visual closed-loop (CL) and open-loop 2 s delay (OL2) conditions. Target location was varied while target amplitude remained constant. Kinematic data (bimanual coupling variables) indicated that, regardless of target location, participants employed one of two highly successful movement execution strategies depending on visual feedback availability. During visual (CL) conditions, participants employed a dominant-hand initiation strategy characterized by a significantly faster right-hand (RH) reaction time and simultaneous hand contact with the target. In contrast, when no visual feedback was available (OL2), participants utilized a search-and-follow strategy characterized by limb coupling at movement onset and a reliance on the dominant RH to contact the target ~62 ms before the left.

In conclusion, the common goal parameters of targeting a single object with both hands are maintained and successfully achieved regardless of visual condition. Furthermore, independent programming of each limb is clearly evident in the observed behaviours, providing support for the neural cross-talk theory of bimanual coordination (Marteniuk & Mackenzie, 1980). Whether movement execution is visually (CL) or memory-guided (OL2), there is a clear preference for RH utilization, possibly due to its dynamic and/or hemispheric advantages in controlling complex motor behaviours (Gonzalez et al., 2006). We therefore propose that bimanual grasping to a solitary target is possibly governed globally by a higher-level structure, with successful execution achieved via independent spinal-pathway modulation of the limbs.
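For readers wanting to see how such coupling measures fall out of raw data, a hypothetical sketch follows: movement onset from a velocity threshold, and contact asynchrony as a signed time difference. The sampling rate and threshold are assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

FS = 200.0             # assumed sampling rate (Hz)
VEL_THRESHOLD = 0.05   # assumed movement-onset threshold (m/s)

def movement_onset(position, fs=FS, thresh=VEL_THRESHOLD):
    """Onset time (s): first sample where hand speed exceeds the threshold.
    (np.argmax returns 0 if the threshold is never crossed; check for that.)"""
    speed = np.abs(np.gradient(position, 1.0 / fs))
    return np.argmax(speed > thresh) / fs

def contact_asynchrony(rh_contact_t, lh_contact_t):
    """Signed lag (s); positive means the right hand contacted first."""
    return lh_contact_t - rh_contact_t
```

Under this convention, the OL2 result reported above would appear as an asynchrony of roughly +0.062 s.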
16. A shape primitive-based grasping strategy using visual object recognition in confined, hazardous environments
Brabec, Cheryl Lynn. 24 March 2014.
Grasping can be a complicated process for robots due to the need to replicate human fine motor skills and the typically high degrees of freedom in robotic hands. Underactuated robotic hands provide a means by which grasps can be executed without the onerous task of calculating every fingertip placement. The general shape configuration modes available to underactuated hands lend themselves well to an approach of grasping by shape primitives, especially when applied to gloveboxes in the nuclear domain, given the finite number of objects anticipated and the safe assumption that objects in the set are rigid. Thus, the object set found in a glovebox can be categorized as a small set of primitives such as cylinders, cubes, and bowls/hemispheres. These same assumptions can also be leveraged for reliable identification and pose estimation within a glovebox. This effort develops and simulates a simple but robust and effective grasp planning algorithm for a 7-DOF industrial robot and a three-fingered, dexterous but underactuated robotic hand. The proposed grasping algorithm creates a grasp by generating a vector from the base of the robot to the object and manipulating that vector into a suitable starting location for a grasp. The grasp preshapes are selected to match shape primitives and are built into the Robotiq gripper used for algorithm demonstration purposes. If a grasp is found to be unsuitable via an inverse kinematics solution check, the algorithm procedurally generates additional grasps to try, based on object geometry, until a solution is found or all possibilities are exhausted. The algorithm was tested and found capable of generating valid grasps for visually identified objects, and it can recalculate grasps if one is found to be incompatible with the current kinematics of the robotic arm.
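The vector-based procedure described here might look like the following sketch: back off from the object along the base-to-object direction, check reachability with the arm's inverse kinematics, and perturb the approach direction until a reachable grasp is found. The helper names, the yaw-perturbation scheme, and the stubbed IK solver are assumptions for illustration; the thesis's actual algorithm and the Robotiq preshape selection are not reproduced here.

```python
import numpy as np

def solve_ik(pose):
    """Stand-in for the 7-DOF arm's IK solver: joint solution or None."""
    ...  # assumed to be supplied by the robot's kinematics library
    return None

def approach_pose(base, obj_center, standoff=0.15, yaw=0.0):
    """Grasp start position: offset from the object along the base-to-object
    vector, optionally rotated about the vertical axis by `yaw` radians."""
    v = obj_center - base
    v = v / np.linalg.norm(v)
    c, s = np.cos(yaw), np.sin(yaw)
    v = np.array([c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]])
    return obj_center - standoff * v

def plan_grasp(base, obj_center,
               yaws=np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)):
    """Try the direct approach first, then procedurally generated variants,
    until the IK check succeeds or all candidates are exhausted."""
    for yaw in yaws:
        pose = approach_pose(base, obj_center, yaw=yaw)
        joints = solve_ik(pose)
        if joints is not None:
            return pose, joints
    return None
```

A fuller implementation would presumably also vary the standoff and pitch according to the matched shape primitive (a cylinder invites a side grasp, a bowl a top-down one), which is where the object-geometry dependence enters.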
17. Effects of passive parallel compliance in tendon-driven robotic hands
Niehues, Taylor D. 24 March 2014.
Humans utilize the inherent biomechanical compliance of their fingers for increased stability and dexterity during manipulation tasks. While series elastic actuation has been explored, little research has examined the role of joint compliance arranged in parallel with the actuators. The goal of this thesis is to demonstrate, through simulation studies and experimental analyses, the advantages gained by employing human-like passive compliance in finger joints when grasping. We first model two planar systems: a single 2-DOF (degree-of-freedom) finger and a pair of 2-DOF fingers grasping an object. In each case, combinations of passive joint compliance and active stiffness control are implemented, and the impulse disturbance responses are compared. The control is carried out at a limited sampling frequency, and an energy analysis is performed to investigate stability. Our approach reveals that limited controller frequency leads to increased actuator energy input, and hence a less stable system, and that human-like passive parallel compliance can improve stability and robustness during grasping tasks. An experimental setup is then designed consisting of dual 2-DOF tendon-driven fingers. An impedance control law for two-fingered object manipulation is developed, using a novel friction compensation technique for improved actuator force control. This is used to experimentally quantify the advantages of parallel compliance during dexterous manipulation tasks, demonstrating smoother trajectory tracking and improved stability and robustness to impacts.
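The core contrast (sampled active stiffness vs. a continuously acting passive spring) fits in a few lines of simulation. A minimal 1-DOF sketch under assumed gains follows; the thesis itself models 2-DOF fingers, and the rest posture of the passive spring is assumed to coincide with the target for simplicity.

```python
K_ACTIVE = 2.0    # assumed active stiffness gain (N·m/rad)
K_PASSIVE = 1.0   # assumed passive parallel stiffness (N·m/rad)
B = 0.05          # assumed joint damping (N·m·s/rad)
I = 0.001         # assumed link inertia (kg·m^2)

def simulate(theta0, theta_des, ctrl_hz, t_end=1.0, sim_dt=1e-4):
    """Explicit-Euler joint simulation. The active torque is only updated
    every 1/ctrl_hz seconds (limited sampling frequency); the passive
    spring acts at every instant, stabilizing the joint between updates."""
    theta, omega, tau_active = theta0, 0.0, 0.0
    steps_per_ctrl = max(1, int(round(1.0 / (ctrl_hz * sim_dt))))
    for step in range(int(t_end / sim_dt)):
        if step % steps_per_ctrl == 0:                    # sampled controller
            tau_active = -K_ACTIVE * (theta - theta_des)
        tau_passive = -K_PASSIVE * (theta - theta_des)    # continuous spring
        alpha = (tau_active + tau_passive - B * omega) / I
        omega += alpha * sim_dt
        theta += omega * sim_dt
    return theta

print(simulate(theta0=0.5, theta_des=0.0, ctrl_hz=100))  # settles near 0
```

Lowering ctrl_hz while holding total stiffness fixed is the experiment of interest: with K_PASSIVE = 0, the sampled controller must inject all corrective energy in stale bursts, which is the destabilizing effect the energy analysis quantifies.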
18. Sensing and Control for Robust Grasping with Simple Hardware
Jentoft, Leif Patrick. 06 June 2014.
Robots can move, see, and navigate in the real world outside carefully structured factories, but they cannot yet grasp and manipulate objects without human intervention. Two key barriers are the complexity of current approaches, which require complicated hardware or precise perception to function effectively, and the challenge of understanding system performance in a tractable manner given the wide range of factors that impact successful grasping. This thesis presents sensors and simple control algorithms that relax the requirements on robot hardware, and a framework to understand the capabilities and limitations of grasping systems.
19. From vision to action: Hand representations in macaque grasping areas AIP, F5, and M1
Schaffelhofer, Stefan. 29 July 2014.
No description available.
20. A Teleological Approach to Robot Programming by Demonstration
Sweeney, John Douglas. 01 February 2011.
This dissertation presents an approach to robot programming by demonstration based on two key concepts: demonstrator intent is the most meaningful signal that the robot can observe, and the robot should have a basic level of behavioral competency from which to interpret observed actions. Intent is a teleological, robust teaching signal, invariant to many common sources of noise in training. The robot can use the knowledge encapsulated in sensorimotor schemas to interpret the demonstration. Furthermore, knowledge gained in prior demonstrations can be applied to future sessions. I argue that programming by demonstration should be organized into declarative and procedural components. The declarative component represents a reusable outline of underlying behavior that can be applied to many different contexts. The procedural component represents the dynamic portion of the task that is based on features observed at run time. I describe how statistical models, and Bayesian methods in particular, can be used to model these components. These models have many features that are beneficial for learning in this domain, such as tolerance for uncertainty and the ability to incorporate prior knowledge into inferences. I demonstrate this architecture through experiments on a bimanual humanoid robot using tasks from the pick-and-place domain. Additionally, I develop and experimentally validate a model for generating grasp candidates using visual features that is learned from demonstration data. This model is especially useful in the context of pick-and-place tasks.
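The grasp-candidate model learned from visual features could, in spirit, resemble a simple Bayesian classifier over demonstrated grasp locations. The Gaussian naive-Bayes form and the feature vectors below are hypothetical; the dissertation's actual model and features are not specified here.

```python
import numpy as np

class GraspCandidateModel:
    """Scores candidate grasp locations by the log-odds of 'demonstrated
    grasp' vs. 'background', learned from labeled visual feature vectors."""

    def fit(self, feats, labels):
        feats = np.asarray(feats, dtype=float)
        labels = np.asarray(labels)
        self.stats = {}
        for c in (0, 1):  # 0: background, 1: demonstrated grasp point
            x = feats[labels == c]
            self.stats[c] = (x.mean(axis=0),
                             x.var(axis=0) + 1e-6,   # variance floor
                             np.log(np.mean(labels == c)))
        return self

    def score(self, x):
        """Log posterior odds; higher favors 'graspable here'."""
        x = np.asarray(x, dtype=float)
        def log_post(c):
            mean, var, log_prior = self.stats[c]
            ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
            return ll + log_prior
        return log_post(1) - log_post(0)
```

The Bayesian framing matches the abstract's emphasis: the class priors carry knowledge from earlier demonstrations forward, and the probabilistic score degrades gracefully under the uncertainty the text mentions.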