1.
Robotic Grasping of Large Objects for Collaborative Manipulation. Tariq, Usama. January 2017.
In the near future, robots are envisioned to work alongside humans in professional and domestic environments without significant restructuring of the workspace. Robotic systems in such setups must be adept at observation, analysis, and rational decision making. To coexist in an environment, humans and robots will need to interact and cooperate on multiple tasks. One fundamental task is the manipulation of large objects in work environments, which requires cooperation between multiple manipulating agents for load sharing. Collaborative manipulation has been studied in the literature with a focus on multi-agent planning and control strategies. However, for a collaborative manipulation task, grasp planning also plays a pivotal role in cooperation and task completion.

In this work, a novel approach is proposed for collaborative grasping and manipulation of large unknown objects. The manipulation task was defined as a sequence of poses and the expected external wrench acting on the target object. In a two-agent manipulation task, the proposed approach selects a grasp for the second agent after observing the grasp location of the first agent. The solution is computed so that it minimizes the grasp wrenches through load sharing between the two agents.

To verify the proposed methodology, an online system for human-robot manipulation of unknown objects was developed. The system utilized depth information from a fixed Kinect sensor for perception and decision making during a human-robot collaborative lift-up. Experiments with multiple objects substantiated that the proposed method results in optimal load sharing despite limited information and partial observability.
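The load-sharing idea behind the second agent's grasp selection can be illustrated with a simple static-balance sketch. This is not the thesis's actual algorithm: it assumes a rigid beam-like object lifted vertically at two points, and all names (`lift_forces`, `select_second_grasp`, the candidate grasp points) are hypothetical.

```python
# Illustrative sketch: pick a second grasp point on a beam-like object so
# the lifting load is shared as evenly as possible between two agents.

def lift_forces(x1, x2, x_com, weight):
    """Static lifting forces at two grasp points supporting a beam.

    Solves f1 + f2 = weight together with the moment balance about the
    centre of mass: f1*(x1 - x_com) + f2*(x2 - x_com) = 0.
    """
    f2 = weight * (x_com - x1) / (x2 - x1)
    f1 = weight - f2
    return f1, f2

def select_second_grasp(x1, candidates, x_com, weight):
    """Given the first agent's grasp at x1, choose the candidate grasp
    that minimizes the larger of the two forces (most even load split)."""
    def worst_force(x2):
        f1, f2 = lift_forces(x1, x2, x_com, weight)
        return max(abs(f1), abs(f2))
    return min(candidates, key=worst_force)

# First agent grasps the left end of a 2 m beam (COM at 1 m, weight 100 N).
best = select_second_grasp(0.0, [0.5, 1.5, 2.0], x_com=1.0, weight=100.0)
print(best)  # -> 2.0: the far end, giving a 50 N / 50 N split
```

Grasping near the centre of mass (x2 = 0.5) would force the first agent to push down while the second carries twice the object's weight; the far end balances the load, which is the effect the thesis's wrench-minimizing selection aims for.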
2.
Timing multimodal turn-taking in human-robot cooperative activity. Chao, Crystal. 27 May 2016.
Turn-taking is a fundamental process that governs social interaction. When humans interact, they naturally take initiative and relinquish control to each other using verbal and nonverbal behavior in a coordinated manner. In contrast, existing approaches for controlling a robot's social behavior do not explicitly model turn-taking, resulting in interaction breakdowns that confuse or frustrate the human and detract from the dyad's cooperative goals. They also lack generality, relying on scripted behavior control that must be designed for each new domain. This thesis seeks to enable robots to cooperate fluently with humans by automatically controlling the timing of multimodal turn-taking. Based on our empirical studies of interaction phenomena, we develop a computational turn-taking model that accounts for multimodal information flow and resource usage in interaction. This model is implemented within a novel behavior generation architecture called CADENCE (the Control Architecture for the Dynamics of Embodied Natural Coordination and Engagement), which controls a robot's speech, gesture, gaze, and manipulation. CADENCE controls turn-taking using a timed Petri net (TPN) representation that integrates resource exchange, interruptible modality execution, and modeling of the human user. We demonstrate progressive developments of CADENCE through multiple domains of autonomous interaction encompassing situated dialogue and collaborative manipulation. We also iteratively evaluate improvements in the system using quantitative metrics of task success, fluency, and balance of control.
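The timed-Petri-net framing of turn-taking can be sketched with a toy model: places hold tokens marking who currently owns the conversational floor, and timed transitions pass the floor after a delay. This is a hypothetical simplification in the spirit of the abstract, not the CADENCE architecture itself; the place names, transition names, and delays are all made up.

```python
# Toy timed Petri net: tokens mark floor ownership; firing a timed
# transition models one agent yielding the turn to the other.

class TimedPetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.transitions = []          # (name, inputs, outputs, delay)
        self.time = 0.0

    def add_transition(self, name, inputs, outputs, delay):
        self.transitions.append((name, inputs, outputs, delay))

    def enabled(self):
        # A transition is enabled when every input place holds a token.
        return [t for t in self.transitions
                if all(self.marking.get(p, 0) > 0 for p in t[1])]

    def step(self):
        """Fire the enabled transition with the shortest delay."""
        name, inputs, outputs, delay = min(self.enabled(), key=lambda t: t[3])
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        self.time += delay
        return self.time, name

# Robot starts with the floor; yielding the turn takes a fixed, made-up time.
net = TimedPetriNet({"robot_has_floor": 1, "human_has_floor": 0})
net.add_transition("robot_yields", ["robot_has_floor"], ["human_has_floor"], 2.0)
net.add_transition("human_yields", ["human_has_floor"], ["robot_has_floor"], 3.0)

events = [net.step() for _ in range(4)]
print(events)  # alternating turn exchanges at t = 2.0, 5.0, 7.0, 10.0
```

A full TPN model of the kind the abstract describes would add places and transitions for each modality (speech, gaze, gesture) and for interruptions, but the token-passing core is the same.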