21 |
The design space for robot appearance and behaviour for social robot companions. Walters, Michael L. January 2008 (has links)
To facilitate necessary task-based interactions and to avoid annoying or upsetting people, a domestic robot will have to exhibit appropriate non-verbal social behaviour. Most current robots have the ability to sense and control the distance of people and objects in their vicinity. An understanding of human-robot proxemics and the associated non-verbal social behaviour is crucial if humans are to accept robots as domestic companions or servants. Therefore, this thesis addressed the following hypothesis: attributes of robot appearance, behaviour, task context and situation will affect the distances that people find comfortable between themselves and a robot. Initial exploratory Human-Robot Interaction (HRI) experiments replicated human-human studies of comfortable approach distances, with a mechanoid robot in place of one of the human interactors. It was found that most human participants respected the robot's interpersonal space, and that there were systematic differences in participants' comfortable approach distances to robots with different voice styles. It was proposed that greater initial comfortable approach distances to the robot were due to perceived inconsistencies between the robot's overall appearance and its voice style. To investigate these issues further, it was necessary to develop HRI experimental set-ups, a novel Video-based HRI (VHRI) trial methodology, trial data collection methods and analytical methodologies. An exploratory VHRI trial then investigated human perceptions of and preferences for robot appearance and non-verbal social behaviour. The methodological approach highlighted the holistic and embodied nature of robot appearance and behaviour. Findings indicated that people tend to rate a particular behaviour less favourably when it is not consistent with the robot's appearance.
A live HRI experiment finally confirmed and extended these previous findings: multiple factors significantly affected participants' preferences for robot-to-human approach distances. There was a significant general tendency for participants to prefer either a tall humanoid robot or a short mechanoid robot, and it was suggested that this may be due to participants' internal or demographic factors. Participants' preferences for robot height and appearance were both found to have significant effects on their preferred live robot-to-human comfortable approach distances, irrespective of the robot type they actually encountered. The thesis confirms, for mechanoid and humanoid robots, results previously found in the domain of human-computer interaction (cf. Reeves & Nass (1996)): people seem to automatically treat interactive artefacts socially. An original empirical human-robot proxemic framework is proposed in which the experimental findings from the study can be unified in the wider context of human-robot proxemics. This is seen as a necessary first step towards the end goal of creating and implementing a working robot proxemic system which allows the robot to: a) exhibit socially acceptable spatial behaviour when interacting with humans, and b) interpret, and gain additional valuable insight into, a range of HRI situations from the proxemic behaviour of humans in the immediate area. Future work concludes the thesis.
|
22 |
Toward Enabling Safe & Efficient Human-Robot Manipulation in Shared Workspaces. Hayne, Rafi 01 September 2016 (has links)
When humans interact, there are many avenues of communication available, ranging from vocal cues to physical gestures. In our past observations, when humans collaborate on manipulation tasks in shared workspaces there is often minimal to no verbal or gestural communication, yet the collaboration is still fluid, with minimal interference between partners. However, when humans perform similar tasks in the presence of a robot collaborator, manipulation can be clumsy, disconnected, or simply not human-like. The focus of this work is to leverage our observations of human-human interaction in a robot's motion planner in order to facilitate safer, more efficient, and more human-like collaborative manipulation in shared workspaces. We first present an approach to formulating the cost function for a motion planner intended for human-robot collaboration, such that robot motions are both safe and efficient. To achieve this, we propose two factors to consider in the cost function of the robot's motion planner: (1) avoidance of the workspace previously occupied by the human, so that robot motion is as safe as possible, and (2) consistency of the robot's motion, so that the motion is as predictable as possible for the human, who can then perform their task without focusing undue attention on the robot. Our experiments in simulation and a human-robot workspace-sharing study compare a cost function that uses only the first factor, and a combined cost that uses both factors, against a baseline method that is perfectly consistent but does not account for the human's previous motion. We find that either cost function outperforms the baseline method in terms of task success rate without degrading task completion time. The best task success rate is achieved with the cost function that includes both the avoidance and consistency terms. Next, we present an approach to human-attention-aware robot motion generation which attempts to convey the intent of the robot's task to its collaborator.
We capture human attention through the combined use of a wearable eye-tracker and a motion capture system. Since human attention is not static, we present a method of generating a motion policy that can be queried online. Finally, we show preliminary tests of this method.
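The two-term cost described in this abstract can be sketched as follows; the weights, the occupancy representation, and all names here are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def combined_cost(trajectory, human_occupancy, previous_trajectory,
                  w_avoid=1.0, w_consist=0.5):
    """Two-term planning cost: avoid the human's recent workspace,
    stay close to the robot's previously executed motion."""
    # Term 1: penalise waypoints in regions the human recently occupied.
    # human_occupancy(p) returns the fraction of recent time the human
    # spent at point p (0 = never, 1 = constantly).
    avoidance = sum(human_occupancy(p) for p in trajectory)

    # Term 2: penalise deviation from the previous trajectory, keeping
    # the motion predictable for the human partner.
    consistency = np.linalg.norm(trajectory - previous_trajectory, axis=1).sum()

    return w_avoid * avoidance + w_consist * consistency
```

In a real planner this cost would be evaluated over many candidate trajectories inside an optimizer, with the occupancy term built from recorded human-motion data.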
|
23 |
Designing and Evaluating Human-Robot Communication : Informing Design through Analysis of User Interaction. Green, Anders January 2009 (has links)
This thesis explores the design and evaluation of human-robot communication for service robots that use natural language to interact with people. The research is centred around three themes: the design of human-robot communication; the evaluation of miscommunication in human-robot communication; and the analysis of spatial influence as an empirical phenomenon and a design element. The method has been to put users in situations of future use by means of hi-fi simulation. Several scenarios were enacted using the Wizard-of-Oz technique: a robot intended for fetch-and-carry services in an office environment; and a robot acting in what can be characterised as a home tour, where the user teaches objects and locations to the robot. Using these scenarios, a corpus of human-robot communication was developed and analysed. The analysis of the communicative behaviours led to the following observations: the users communicate with the robot in order to solve a main task goal. In order to fulfil this goal they take over service actions that the robot is incapable of performing. Once users have understood that the robot is capable of performing actions, they explore its capabilities. During the interactions the users continuously monitor the behaviour of the robot, attempting to elicit feedback or to draw its perceptual attention to their communicative behaviour. Information about the communicative status of the robot seems to have a fundamental impact on the quality of interaction: large portions of the miscommunication that occurs in the analysed scenarios can be attributed to ill-timed, lacking or irrelevant feedback from the robot. The analysis of the corpus data also showed that the users' spatial behaviour seemed to be influenced by the robot's communicative behaviour, embodiment and positioning. This means that robot design can consider strategies for spatial prompting to influence the users' spatial behaviour.
The understanding that information about the robot's communicative status must be provided continuously to its users leaves us with an intriguing design challenge for the future: when designing communication for a service robot, we need to design communication for the robot's work tasks and, simultaneously, provide information based on the system's communicative status to continuously make users aware of the robot's communicative capability.
|
24 |
Requirements for effective collision detection on industrial serial manipulators. Schroeder, Kyle Anthony 16 October 2013 (has links)
Human-robot interaction (HRI) is the future of robotics. It is essential in expanding markets such as surgical, medical, and therapy robotics; however, existing industrial systems can also benefit from safe and effective HRI. Many robots are now being fitted with joint torque sensors to enable effective human-robot collision detection, but many existing and off-the-shelf industrial robotic systems are not equipped with these sensors. This work presents and demonstrates a method for effective collision detection on a system with motor current feedback instead of joint torque sensors. The effectiveness of the method is also evaluated by simulating collisions with human hands and arms. Joint torques are estimated from the input motor currents. The joint friction and hysteresis losses are estimated for each joint of an SIA5D 7 degree-of-freedom (DOF) manipulator. The estimated joint torques are validated by comparison to the joint torques predicted by recursive application of the Newton-Euler equations. During a pick-and-place motion, the estimation error in joint 2 is less than 10 newton-meters. Acceleration increased the estimation uncertainty, resulting in estimation errors of 20 newton-meters over the entire workspace. When the manipulator makes contact with the environment or a human, the same technique can be used to estimate contact torques from motor current. The current-estimated contact torque is validated against the torque calculated from a measured force; the error in contact force is less than 10 newtons. Collision detection is demonstrated on the SIA5D using estimated joint torques, and its effectiveness is explored through simulated collisions with human hands and arms. Simulated collisions are performed both for a typical pick-and-place motion and for trajectories that traverse the entire workspace. The simulated forces and pressures are compared to acceptable maximums for human hands and arms.
During pick-and-place motions with vertical and lateral end-effector motions at 10 mm/s and 25 mm/s, the maximum forces and pressures remained below acceptable levels. At and near singular configurations some collisions can be difficult to detect; fortunately, these configurations are generally avoided for kinematic reasons.
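As a rough sketch of the approach, joint torque can be estimated from motor current via a torque constant and a simple friction model, and a collision flagged when the estimate diverges from the rigid-body model prediction; all constants below are illustrative placeholders, not the SIA5D's actual parameters:

```python
import math

def estimate_joint_torque(current, velocity, kt=0.5, b=0.1, fc=0.2):
    """Joint torque from motor current: motor torque (kt * current) minus
    viscous (b) and Coulomb (fc) friction losses opposing the motion."""
    friction = b * velocity
    if velocity:
        friction += fc * math.copysign(1.0, velocity)
    return kt * current - friction

def collision_detected(estimated_torque, model_torque, threshold=10.0):
    """Flag a collision when the current-based torque estimate deviates
    from the Newton-Euler model prediction by more than the estimation
    error bound (the thesis reports roughly 10 Nm for pick-and-place)."""
    return abs(estimated_torque - model_torque) > threshold
```

The threshold has to exceed the estimation error, which is why the 20 Nm whole-workspace uncertainty reported above matters: a larger error bound makes light contacts harder to detect.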
|
25 |
An Augmented Reality Human-Robot Collaboration System. Green, Scott Armstrong January 2008 (has links)
Although robotics is well established as a research field, there has been relatively little work on human-robot collaboration. This type of collaboration is going to become an increasingly important issue as robots work ever more closely with humans. Clearly, there is a growing need for research on human-robot collaboration and communication between humans and robotic systems.
Research into human-human communication can be used as a starting point in developing a robust human-robot collaboration system. Previous research into collaborative efforts with humans has shown that grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication. Therefore, these items comprise a list of required attributes of an effective human-robot collaborative system.
Augmented Reality (AR) is a technology for overlaying three-dimensional virtual graphics onto the user's view of the real world. It also allows for real time interaction with these virtual graphics, enabling a user to reach into the augmented world and manipulate it directly. The internal state of a robot and its intended actions can be displayed through the virtual imagery in the AR environment. Therefore, AR can bridge the divide between human and robotic systems and enable effective human-robot collaboration.
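As a minimal illustration of the overlay step, a robot's intended 3D waypoint can be projected into the user's camera view with a pinhole model; the intrinsics and pose handling here are placeholder assumptions, not the AR-HRC system's actual pipeline:

```python
import numpy as np

def project_to_image(point_world, world_to_camera, fx=800.0, fy=800.0,
                     cx=320.0, cy=240.0):
    """Project a 3D point (e.g. a robot's intended waypoint) into the
    user's camera view using a pinhole camera model.

    world_to_camera : 4x4 homogeneous world-to-camera transform.
    Returns (u, v) pixel coordinates, or None if the point lies behind
    the camera.
    """
    p = world_to_camera @ np.append(point_world, 1.0)
    if p[2] <= 0:
        return None  # behind the camera; nothing to draw
    u = fx * p[0] / p[2] + cx
    v = fy * p[1] / p[2] + cy
    return (u, v)
```

An AR system would draw the robot's planned path by projecting each waypoint this way and rendering the resulting pixel positions over the live camera image.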
This thesis describes the work involved in developing the Augmented Reality Human-Robot Collaboration (AR-HRC) System. It first garners design criteria for the system from a review of communication and collaboration in human-human interaction, the current state of Human-Robot Interaction (HRI) and related work in AR. A review of research in multimodal interfaces is then provided, highlighting the benefits of such an interface design. Accordingly, an AR multimodal interface was developed to determine whether this type of design improved performance over a single-modality design. Indeed, the multimodal interface was found to improve performance, providing the impetus to use a multimodal design approach for the AR-HRC system.
The architectural design of the system is then presented. A user study conducted to determine what kind of interaction people would use when collaborating with a mobile robot is discussed and then the integration of a mobile robot is described. Finally, an evaluation of the AR-HRC system is presented.
|
26 |
The development of a human-robot interface for industrial collaborative system. Tang, Gilbert January 2016 (has links)
Industrial robots have been identified as one of the most effective solutions for optimising output and quality in many industries. However, a number of manufacturing applications involve complex tasks and variable components, which prohibits fully automated solutions for the foreseeable future. Breakthroughs in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution for advanced production, with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot can perform simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hand. Robots in such a system will operate as “intelligent assistants”. In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interaction requires effective means of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity; it can communicate with the human via the exchange of gestures, as well as via visual signals which operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system integrated from off-the-shelf components.
The system should be capable of receiving input from the human user via an intuitive method, as well as indicating its status to the user effectively. The HRI was developed using a combination of hardware integration and software development. The software and control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
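A gesture command layer of the kind described might, in much-simplified form, map recognised gestures to robot commands while rejecting low-confidence detections; the gesture names, commands, and threshold below are hypothetical, not those of the actual system:

```python
# Hypothetical gesture labels and robot commands, for illustration only.
GESTURE_COMMANDS = {
    "open_palm":   "stop_motion",
    "thumbs_up":   "resume_task",
    "point_left":  "move_to_left_station",
    "point_right": "move_to_right_station",
}

def interpret_gesture(gesture_label, confidence, min_confidence=0.8):
    """Map a recognised hand gesture to a robot command.

    Low-confidence detections are rejected so that an ambiguous gesture
    never triggers a motion; the robot would instead report 'not
    understood' via its visual status indicator.
    """
    if confidence < min_confidence or gesture_label not in GESTURE_COMMANDS:
        return "signal_not_understood"
    return GESTURE_COMMANDS[gesture_label]
```

Keeping the mapping in a single table makes it straightforward to retarget the same interface to a different robot, in line with the portability goal stated above.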
|
27 |
Safety system design in human-robot collaboration : Implementation for a demonstrator case in compliance with ISO/TS 15066. Schaffert, Carolin January 2019 (has links)
A close collaboration between humans and robots is one approach to achieving flexible production flows and a high degree of automation at the same time. In human-robot collaboration, both entities work alongside each other in a fenceless, shared environment. These workstations combine human flexibility, tactile sense and intelligence with robotic speed, endurance and accuracy, leading to improved ergonomic working conditions for the operator, better quality and higher efficiency. However, the widespread adoption of human-robot collaboration is limited by current safety legislation: robots are powerful machines, and without spatial separation from the operator the risks increase drastically. The technical specification ISO/TS 15066 serves as a guideline for collaborative operations and supplements the international standard ISO 10218 for industrial robots. Because ISO/TS 15066 represents the first draft of a coming standard, companies have to gain experience in applying it. Currently, the guideline prohibits collisions with the head in transient contact. In this thesis work, a safety system is designed which complies with ISO/TS 15066 and uses certified safety technologies. Four theoretical safety system designs are proposed, with a laser scanner as a presence-sensing device and a collaborative robot, the KUKA LBR iiwa. The system either stops the robot motion; reduces the robot's speed and then triggers a stop; or only activates a stop after a collision between the robot and the human has occurred. In system 3, the size of the stop zone is decreased by combining the speed-and-separation-monitoring principle with the power-and-force-limiting safeguarding mode. The safety zones are static and are calculated according to the protective separation distance in ISO/TS 15066. A risk assessment is performed to reduce all risks to an acceptable level, leading to the final safety system design after three iterations.
As a proof of concept, the final safety system design is implemented for a demonstrator in a laboratory environment at Scania. Through a feasibility study, the implementation differences between theory and practice for the four proposed designs are identified and a feasible safety system behavior is developed. The robot reaction is realized through the safety configuration of the robot: three ESM states are defined to use the internal safety functions of the robot and to integrate the laser scanner signal. The laser scanner is connected as a digital input to the discrete safety interface of the robot controller. To sum up, this thesis work describes the safety system design with all implementation details.
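The protective separation distance used to size the static safety zones can be sketched as a sum of contributions, following the general structure of the speed-and-separation-monitoring formula in ISO/TS 15066; the parameter names and the example values in the test are illustrative, not certified figures from the standard:

```python
def protective_separation_distance(v_human, v_robot, t_reaction, t_stop,
                                   s_stop, c_intrusion, z_robot, z_sensor):
    """Sum-of-contributions form of the protective separation distance.

    v_human     : directed speed of the operator towards the robot [m/s]
    v_robot     : directed speed of the robot towards the operator [m/s]
    t_reaction  : detection and system reaction time [s]
    t_stop      : robot stopping time [s]
    s_stop      : distance the robot travels while stopping [m]
    c_intrusion : intrusion distance of a body part into the zone [m]
    z_robot, z_sensor : position uncertainties of robot and sensor [m]
    """
    s_h = v_human * (t_reaction + t_stop)  # human travel until robot is at rest
    s_r = v_robot * t_reaction             # robot travel during reaction time
    return s_h + s_r + s_stop + c_intrusion + z_robot + z_sensor
```

The human-motion term usually dominates, which is why slow sensing (large t_reaction) inflates the stop zone so quickly.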
|
28 |
Using a Model of Temporal Latency to Improve Supervisory Control of Human-Robot Teams. Blatter, Kyle Lee 16 July 2014 (has links) (PDF)
When humans and remote robots work together on a team, the robots always interact with a human supervisor, even if the interaction is limited to occasional reports. Distracting a human with robotic interactions does not pose a problem so long as the inclusion of robots increases the team's overall effectiveness. Unfortunately, increasing the supervisor's cognitive load may decrease the team's sustainable performance to the point where robotic agents are more a liability than an asset. Present approaches resolve this problem with adaptive autonomy, where a robot changes its level of autonomy based on the supervisor's cognitive load. This thesis proposes to augment adaptive autonomy by modeling temporal latency and using this model to optimally select the temporal interval between when a supervisor is informed of a pending change and when the robot makes the change. This enables robotic team members to time their actions in response to the supervisor's cognitive load. The hypothesis is confirmed in a user study where 26 participants interacted with a simulated search-and-rescue scenario.
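The timing idea can be illustrated with a toy scheduler that, given some forecast of the supervisor's cognitive load, picks the least disruptive moment within an acceptable window to announce a pending autonomy change; this is a hypothetical sketch, not the thesis's actual model:

```python
def best_notification_time(predicted_workload, earliest, latest):
    """Pick the time step in [earliest, latest] with the lowest predicted
    supervisor workload, so the robot's announcement of a pending change
    lands when the interruption is cheapest.

    predicted_workload : list of forecast cognitive-load values per time
                         step (how such a forecast is obtained is outside
                         this sketch)
    """
    return min(range(earliest, latest + 1),
               key=lambda t: predicted_workload[t])
```

The window bounds encode the task constraint that the change cannot wait indefinitely; within them, the robot trades a small delay for a cheaper interruption.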
|
29 |
Evaluating Human-robot Implicit Communication Through Human-human Implicit Communication. Richardson, Andrew Xenos 01 January 2012 (has links)
Human-Robot Interaction (HRI) research is examining ways to make human-robot (HR) communication more natural. Incorporating natural communication techniques is expected to make HR communication seamless and more natural for humans. Humans naturally incorporate implicit levels of communication, so including implicit communication in HR communication should provide tremendous benefit. The aim of this work was to evaluate a model for human-robot implicit communication. Specifically, the primary goal of this research was to determine whether humans can assign meanings to implicit cues received from autonomous robots as they do for identical implicit cues received from humans. An experiment was designed to allow participants to assign meanings to identical implicit cues (pursuing, retreating, investigating, hiding, patrolling) received from humans and robots. Participants were tasked to view random video clips of both entity types, label the implicit cue, and assign a level of confidence to their chosen answer. Physiological data were tracked during the experiment using an electroencephalogram and an eye-tracker, and participants answered workload and stress questionnaires following each scenario. Results revealed that participants were significantly more accurate with human cues (84%) than with robot cues (82%), although accuracy was high, above 80%, for both entity types. Despite the high accuracy for both types, participants remained significantly more confident in answers for humans (6.1) than for robots (5.9) on a confidence scale of 1-7. Subjective measures showed no significant differences in stress or mental workload across entities. Physiological measures were not significant for the engagement index across entities, but robots resulted in significantly higher levels of cognitive workload for participants via the index of cognitive activity.
The results of this study revealed that participants are more confident interpreting human implicit cues than identical cues received from a robot; however, the accuracy of interpretation remained high for both entities, and participants showed no significant difference in interpreting the different cues across entities. Therefore, much of the ability to interpret an implicit cue resides in the actual cue rather than the entity. Proper training should boost confidence as humans begin to work alongside autonomous robots as teammates, and it is possible to train humans to recognize cues based on the movement, regardless of the entity demonstrating it.
|
30 |
Decision shaping and strategy learning in multi-robot interactions. Valtazanos, Aris January 2013 (has links)
Recent developments in robot technology have contributed to the advancement of autonomous behaviours in human-robot systems; for example, in following instructions received from an interacting human partner. Nevertheless, increasingly many systems are moving towards more seamless forms of interaction, where factors such as implicit trust and persuasion between humans and robots are brought to the fore. In this context, the problem of attaining, through suitable computational models and algorithms, more complex strategic behaviours that can influence human decisions and actions during an interaction remains largely open. To address this issue, this thesis introduces the problem of decision shaping in strategic interactions between humans and robots, where a robot seeks to lead, without however forcing, an interacting human partner to a particular state. Our approach to this problem is based on a combination of statistical modeling and synthesis of demonstrated behaviours, which enables robots to efficiently adapt to novel interacting agents. We primarily focus on interactions between autonomous and teleoperated (i.e. human-controlled) NAO humanoid robots, using the adversarial soccer penalty shooting game as an illustrative example. We begin by describing the various challenges that a robot operating in such complex interactive environments is likely to face. Then, we introduce a procedure through which composable strategy templates can be learned from provided human demonstrations of interactive behaviours. We subsequently present our primary contribution to the shaping problem, a Bayesian learning framework that empirically models and predicts the responses of an interacting agent, and computes action strategies that are likely to influence that agent towards a desired goal. We then address the related issue of factors affecting human decisions in these interactive strategic environments, such as the availability of perceptual information for the human operator.
Finally, we describe an information processing algorithm, based on the Orient motion capture platform, which serves to facilitate direct (as opposed to teleoperation-mediated) strategic interactions between humans and robots. Our experiments introduce and evaluate a wide range of novel autonomous behaviours, where robots are shown to (learn to) influence a variety of interacting agents, ranging from other simple autonomous agents, to robots controlled by experienced human subjects. These results demonstrate the benefits of strategic reasoning in human-robot interaction, and constitute an important step towards realistic, practical applications, where robots are expected to be not just passive agents, but active, influencing participants.
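The empirical response-modelling component can be illustrated, in much-simplified form, with a Dirichlet-multinomial model of an opponent's actions in the penalty-shooting example; the class below is a hypothetical sketch, not the thesis's actual framework:

```python
class OpponentModel:
    """Dirichlet-multinomial model of an interacting agent's responses
    (penalty-shooting example: which way does the keeper dive?)."""

    def __init__(self, actions=("left", "centre", "right"), prior=1.0):
        # Symmetric Dirichlet prior; counts accumulate observed responses.
        self.counts = {a: prior for a in actions}

    def observe(self, action):
        """Update the model with one observed opponent response."""
        self.counts[action] += 1

    def predict(self):
        """Posterior predictive probability of each opponent action."""
        total = sum(self.counts.values())
        return {a: c / total for a, c in self.counts.items()}

    def best_shot(self):
        """Shoot where the keeper is predicted least likely to dive."""
        probs = self.predict()
        return min(probs, key=probs.get)
```

The prior keeps early predictions cautious; as observations of a particular opponent accumulate, the model adapts and the robot's strategy shifts accordingly, which is the essence of adapting to novel interacting agents.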
|