21 |
Decision shaping and strategy learning in multi-robot interactions (Valtazanos, Aris, January 2013)
Recent developments in robot technology have contributed to the advancement of autonomous behaviours in human-robot systems; for example, in following instructions received from an interacting human partner. Nevertheless, increasingly many systems are moving towards more seamless forms of interaction, where factors such as implicit trust and persuasion between humans and robots are brought to the fore. In this context, the problem of attaining, through suitable computational models and algorithms, more complex strategic behaviours that can influence human decisions and actions during an interaction, remains largely open. To address this issue, this thesis introduces the problem of decision shaping in strategic interactions between humans and robots, where a robot seeks to lead, without however forcing, an interacting human partner to a particular state. Our approach to this problem is based on a combination of statistical modeling and synthesis of demonstrated behaviours, which enables robots to efficiently adapt to novel interacting agents. We primarily focus on interactions between autonomous and teleoperated (i.e. human-controlled) NAO humanoid robots, using the adversarial soccer penalty shooting game as an illustrative example. We begin by describing the various challenges that a robot operating in such complex interactive environments is likely to face. Then, we introduce a procedure through which composable strategy templates can be learned from provided human demonstrations of interactive behaviours. We subsequently present our primary contribution to the shaping problem, a Bayesian learning framework that empirically models and predicts the responses of an interacting agent, and computes action strategies that are likely to influence that agent towards a desired goal. We then address the related issue of factors affecting human decisions in these interactive strategic environments, such as the availability of perceptual information for the human operator. 
Finally, we describe an information processing algorithm, based on the Orient motion capture platform, which serves to facilitate direct (as opposed to teleoperation-mediated) strategic interactions between humans and robots. Our experiments introduce and evaluate a wide range of novel autonomous behaviours, where robots are shown to (learn to) influence a variety of interacting agents, ranging from other simple autonomous agents, to robots controlled by experienced human subjects. These results demonstrate the benefits of strategic reasoning in human-robot interaction, and constitute an important step towards realistic, practical applications, where robots are expected to be not just passive agents, but active, influencing participants.
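The Bayesian shaping idea above can be sketched in miniature: a robot keeps an empirical, Dirichlet-smoothed model of how an opponent responds to each of its own actions, and picks the action whose predicted response is most favourable. This is an illustrative reconstruction, not the thesis's actual framework; the class, action names, and payoff function below are hypothetical.

```python
class OpponentModel:
    """Dirichlet-smoothed empirical model of an opponent's response
    to each of our actions (hypothetical sketch, not the thesis's model)."""

    def __init__(self, actions, responses, prior=1.0):
        # counts[a][r] starts at the prior pseudo-count (Laplace smoothing),
        # so unseen responses never get zero probability
        self.counts = {a: {r: prior for r in responses} for a in actions}

    def update(self, action, response):
        self.counts[action][response] += 1.0

    def predict(self, action):
        """Posterior-mean distribution over the opponent's responses."""
        total = sum(self.counts[action].values())
        return {r: c / total for r, c in self.counts[action].items()}

    def best_action(self, payoff):
        """Pick the action maximising expected payoff under the model."""
        def expected(a):
            return sum(p * payoff(a, r) for r, p in self.predict(a).items())
        return max(self.counts, key=expected)

# Toy penalty-shot interaction: we score when the keeper dives the other way.
model = OpponentModel(actions=["left", "right"], responses=["left", "right"])
for _ in range(20):
    model.update("left", "left")      # the keeper has learned to cover our left shots
payoff = lambda a, r: 1.0 if a != r else 0.0
print(model.best_action(payoff))      # prints: right
```

The same structure extends to shaping: instead of maximising immediate payoff, the payoff function can score how close the opponent's predicted response moves it toward the robot's desired state.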
22 |
A developmental model of trust in humanoid robots (Patacchiola, Massimiliano, January 2018)
Trust between humans and artificial systems has recently received increased attention due to the widespread use of autonomous systems in our society. In this context, trust plays a dual role. On the one hand, it is necessary to build robots that are perceived as trustworthy by humans. On the other hand, we need to give those robots the ability to discriminate between reliable and unreliable informants. This thesis focuses on the second problem, presenting an interdisciplinary investigation of trust, in particular a computational model based on neuroscientific and psychological assumptions. First, the use of Bayesian networks for modelling causal relationships was investigated. This approach follows the well-known theory-theory framework of the Theory of Mind (ToM) and an established line of research based on the Bayesian description of mental processes. Next, the role of gaze in human-robot interaction was investigated. The results of this research were used to design a head pose estimation system based on convolutional neural networks. The system can be used in robotic platforms to facilitate joint attention tasks and enhance trust. Finally, everything was integrated into a structured cognitive architecture based on an actor-critic reinforcement learning framework and an intrinsic motivation feedback signal given by a Bayesian network. To evaluate the model, the architecture was embodied in the iCub humanoid robot and used to replicate a developmental experiment. The model provides a plausible description of children's reasoning that sheds some light on the underlying mechanisms involved in trust-based learning. The last part of the thesis discusses the contribution of human-robot interaction research, with the aim of understanding the factors that influence the establishment of trust during joint tasks.
Overall, this thesis provides a computational model of trust that takes into account the development of cognitive abilities in children, with a particular emphasis on the ToM and the underlying neural dynamics.
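As a toy illustration of discriminating reliable from unreliable informants (a deliberate simplification: the thesis integrates a full Bayesian network into a cognitive architecture, not this single posterior), a robot can track each informant's reliability as a Beta distribution updated by observed outcomes:

```python
class Informant:
    """Beta-posterior sketch of informant reliability: the robot counts
    how often an informant's claims later prove correct (hypothetical)."""

    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b                # Beta(a, b) prior over reliability

    def observe(self, was_correct):
        if was_correct:
            self.a += 1.0
        else:
            self.b += 1.0

    def reliability(self):
        return self.a / (self.a + self.b)    # posterior mean

reliable, unreliable = Informant(), Informant()
for _ in range(8):
    reliable.observe(True)       # this informant's labels keep checking out
    unreliable.observe(False)    # this one is consistently wrong

# Trust-based learning: prefer labels from the more reliable informant.
print(reliable.reliability() > unreliable.reliability())   # prints: True
```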
23 |
Adaptive Optimal Control in Physical Human-Robot Interaction (January 2019)
What if prosthetics could be integrated seamlessly with the human body, and robots could help improve the lives of children with disabilities? With physical human-robot interaction appearing in many aspects of life, including industrial, medical, and social settings, how these robots interact with humans becomes ever more important: how smoothly a robot can interact with a person determines how safe and efficient the relationship will be. This thesis investigates an adaptive control method that allows a robot to adapt to the human's actions based on the interaction force, making the relationship more effortless and less strained when the robot has a different goal than the human, a situation naturally framed in game-theoretic terms, using multiple techniques that adapt the system. A few applications include robots in physical therapy, manufacturing robots that can adapt to a changing environment, and robots teaching people something new, such as dancing or learning how to walk again after surgery.
The experience gained is an understanding of how the system's cost function works, including the tracking error, the speed of the system, the robot's effort, and the human's effort. The two-agent system results in a two-agent adaptive impedance model with an input for each agent. This leads to a nontraditional linear quadratic regulator (LQR) that must be separated and then added together, thus recovering a traditional LQR. This new experience can be used in the future to help build better safety protocols for manufacturing robots, and the knowledge gained from this research could be used to develop technologies that allow a robot to adapt so as to counteract human error. / Dissertation/Thesis / Masters Thesis Engineering 2019
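The "separate, then recombine" observation about the two-agent LQR can be made concrete for a scalar state: stacking both agents' input gains into B = [b1, b2] and their effort weights into R = diag(r1, r2) reduces the two-input problem to a standard discrete-time Riccati recursion. The function below is a hedged sketch of that reduction, not the thesis's controller.

```python
def lqr_two_agent(a, b1, b2, q, r1, r2, iters=500):
    """Scalar-state discrete-time LQR with two inputs (robot u1, human u2).
    Cost: sum of q*x^2 + r1*u1^2 + r2*u2^2.  Stacking B = [b1, b2] and
    R = diag(r1, r2) turns the two-agent problem into a standard LQR,
    solved here by value iteration on the Riccati recursion (illustrative)."""
    p = q
    for _ in range(iters):
        # S = R + B'pB (2x2) and G = B'pa (2x1); gains K = inverse(S) @ G
        s11 = r1 + b1 * b1 * p
        s22 = r2 + b2 * b2 * p
        s12 = b1 * b2 * p
        det = s11 * s22 - s12 * s12
        g1, g2 = b1 * p * a, b2 * p * a
        k1 = (s22 * g1 - s12 * g2) / det
        k2 = (s11 * g2 - s12 * g1) / det
        acl = a - b1 * k1 - b2 * k2      # closed-loop dynamics a - B K
        p = q + a * p * acl              # Riccati update: q + a p (a - B K)
    return k1, k2

# Unstable plant (a = 1.1) shared by two agents with equal effort costs:
# by symmetry both agents receive the same gain, and the pair stabilises
# a plant that neither cost term alone would prioritise.
k1, k2 = lqr_two_agent(a=1.1, b1=1.0, b2=1.0, q=1.0, r1=1.0, r2=1.0)
print(abs(1.1 - k1 - k2) < 1.0)   # closed loop is stable -> prints: True
```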
24 |
From Qualitative to Quantitative: Supporting Robot Understanding in Human-Interactive Path Planning (Yi, Daqing, 01 August 2016)
Improvements in robot autonomy are changing human-robot interaction from low-level manipulation to high-level task-based collaboration. When a robot can independently and autonomously execute tasks, a human in a human-robot team acts as a collaborator or task supervisor instead of a tele-operator. When applying this to planning paths for a robot's motion, it is very important that the supervisor's qualitative intent is translated into a quantitative model so that the robot can produce a desirable outcome. In robotic path planning, algorithms can transform a human's qualitative requirement into a robot's quantitative model so that the robot's behavior satisfies the human's intent. In particular, algorithms can be created that allow a human to express multi-objective and topological preferences, and can be built to use verbal communication. This dissertation presents a series of robot motion-planning algorithms, each designed to support some aspect of a human's intent. Specifically, we present algorithms for the following problems: planning with a human-motion constraint, planning with a topological requirement, planning with multiple objectives, and creating models of constraints, requirements, and objectives from verbal instructions. These algorithms create a set of robot behaviors that support flexible decision-making over a range of complex path-planning tasks.
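One standard way to realise "planning with multiple objectives" (a generic sketch under assumed names, not Yi's algorithms) is to let the human supply weights that scalarize a vector of edge costs, after which an ordinary Dijkstra-style search applies:

```python
import heapq

def plan(graph, start, goal, weights):
    """Shortest path where each edge carries a vector of costs (e.g. distance,
    risk) scalarized by user-supplied weights -- one simple way to fold a
    human's multi-objective preferences into a standard search."""
    frontier = [(0.0, start, [start])]       # (scalarized cost, node, path)
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, vec in graph.get(node, []):
            step = sum(w * c for w, c in zip(weights, vec))
            new = cost + step
            if new < best.get(nxt, float("inf")):
                best[nxt] = new
                heapq.heappush(frontier, (new, nxt, path + [nxt]))
    return None

# Edges: (neighbor, (distance, risk)).  The longer corridor with low risk
# wins once the operator weights safety heavily.
graph = {
    "A": [("B", (1.0, 5.0)), ("C", (2.0, 0.5))],
    "B": [("G", (1.0, 5.0))],
    "C": [("G", (2.0, 0.5))],
}
print(plan(graph, "A", "G", weights=(1.0, 2.0))[1])   # prints: ['A', 'C', 'G']
```

Changing the weights to ignore risk, `weights=(1.0, 0.0)`, flips the answer to the shorter but riskier route through B, which is the behavioural knob the human preference controls.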
25 |
From HCI to HRI: Designing Interaction for a Service Robot (Hüttenrauch, Helge, January 2006)
Service robots are mobile, embodied artefacts that operate in co-presence with their users. This is a challenge for human-robot interaction (HRI) design: the robot's interfaces must support users in understanding the system's current state and possible next actions. One aspect of designing such interaction is to understand users' preferences and expectations by involving them in the design process. This thesis takes a user-centered design (UCD) perspective and tries to understand the different user roles that exist in service robotics in order to consider possible design implications. Another important aim of the thesis is to understand the spatial management that occurs in face-to-face encounters between humans and robotic systems. The Cero robot is an office "fetch-and-carry" robot that supports a user in the transportation of light objects in an office environment. The iterative, user-centered design of the graphical user interface (GUI) for the Cero robot is presented in Paper I, based upon the findings from multiple prototype design and evaluation iterations. The GUI is one of the robot's interfacing components, i.e., it is to be seen in the overall interplay of the robot's physical design and other interface modalities developed in parallel with the GUI. As the interaction strategy for the GUI, a graphical representation is recommended that simplifies the graphical elements and hides the robot system's complexity in sensing and mission execution. The usage of the Cero robot by a motion-impaired user over a period of three months is presented in Paper II. This longitudinal user study aims to gain insights into the daily usage of such an assistive robot. The approach is complementary to the described GUI design and development process, as it allows the situated use of the Cero robot as a novel service application to be investigated empirically, over a longer period of time, with the provided interfaces.
Findings from this trial show that the robot and its interfaces provide a benefit to the user in the transport of light objects and also afford increased independence. The long-term study also reveals further aspects of the Cero robot system's usage as part of a workplace setting, including the social context that such a mobile, embodied system needs to be designed for. During the long-term user study, bystanders in the operation area of the Cero robot were observed in their attempts to interact with it. To better understand how such bystander users may shape the interaction with a service robot system, an experimental study in Paper III investigates this special type and role of robot users. A scenario is described in which the Cero robot addresses invited trial subjects and asks them for a cup of coffee. The findings show that the level of occupation significantly influences bystander users' willingness to assist the Cero robot with its request. The joint handling of space is an important part of HRI, as both users and service robots are mobile and often co-present during interaction. To inform the development of future robot locomotion behaviors and interaction design strategies, a Wizard-of-Oz (WOZ) study is presented in Paper IV that explores the role of posture and positioning in HRI. The interpersonal distances and spatial formations observed during this trial are quantified and analyzed in a joint interaction task between a robot and its users in Paper V. Findings show that a face-to-face spatial formation and a distance of roughly 46 to 122 cm are dominant while initiating a robot mission or instructing the robot about an object or place. Paper VI investigates another aspect of the role of spatial management in the joint task between a robot and its user, based upon the study described in Papers IV and V.
Taking the dynamics of interaction into account, the findings are that users structure their activities with the robot and that this organizing is observable as small movements in interaction. These small adaptations in posture and orientation signify the transition between different episodes of interaction and prepare for the next interaction exchange in the shared space. The understanding of these spatial management behaviors allows human-robot interaction to be designed with an awareness and active handling of space as a structuring interaction element.
26 |
A Novel Approach for Performance Assessment of Human-Robotic Interaction (Abou Saleh, Jamil, 16 March 2012)
Robots have always been touted as powerful tools that could be used effectively in a number of applications ranging from automation to human-robot interaction. In order for such systems to operate adequately and safely in the real world, they must be able to perceive and must be capable of reasoning up to a certain level. Toward this end, performance evaluation metrics are used as important measures. This research work is intended to be a further step toward identifying common metrics for task-oriented human-robot interaction. We believe that within the context of human-robot interaction systems, both humans' and robots' actions and interactions (jointly and independently) can significantly affect the quality of the accomplished task. As such, our goal becomes that of providing a foundation upon which we can assess how well the human and the robot perform as a team. Thus, we propose a generic performance metric to assess the performance of the human-robot team, where one or more robots are involved. Sequential and parallel robot cooperation schemes with varying levels of task dependency are considered, and the proposed performance metric is augmented and extended to accommodate such scenarios. This is supported by intuitively derived mathematical models and advanced numerical simulations. To efficiently model such a metric, we propose a two-level fuzzy temporal model to evaluate and estimate the human's trust in automation while collaborating and interacting with robots and machines to complete tasks. Trust modelling is critical, as it directly influences the interaction time that should be dedicated, directly and indirectly, toward interacting with the robot. Another fuzzy temporal model is also presented to evaluate the human's reliability during the interaction time.
A significant amount of research work indicates that system failures are attributable almost equally to humans and to machines; assessing this factor in human-robot interaction systems is therefore crucial. The proposed framework is based on the most recent research in the areas of human-machine interaction and performance evaluation metrics. The fuzzy knowledge bases are further updated by implementing an application robotic platform where robots and users interact via semi-natural language to achieve tasks with varying levels of complexity and completion rates. User feedback is recorded and used to tune the knowledge base where needed. This work is intended to serve as a foundation for further quantitative research on evaluating the performance of human-robot teams in the achievement of collective tasks.
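A minimal flavour of a fuzzy trust model (purely illustrative; the thesis's two-level fuzzy temporal model is considerably richer, and the variables and breakpoints here are invented) can be given with triangular memberships over task success and response time, a four-rule base, and weighted-average defuzzification:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b (assumed shape)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_level(success_rate, response_time):
    """Tiny Mamdani-style rule base: trust rises with task success and
    falls with sluggish responses; defuzzified as a weighted average of
    each rule's representative trust value."""
    fast = tri(response_time, -1.0, 0.0, 1.0)   # seconds, hypothetical scale
    slow = tri(response_time, 0.0, 1.0, 2.0)
    good = tri(success_rate, 0.5, 1.0, 1.5)     # fraction of tasks succeeded
    poor = tri(success_rate, -0.5, 0.0, 0.5)
    # (rule firing strength, representative trust output)
    rules = [(min(good, fast), 0.9), (min(good, slow), 0.6),
             (min(poor, fast), 0.4), (min(poor, slow), 0.1)]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5            # neutral trust if no rule fires

print(round(trust_level(0.9, 0.2), 2))   # prints: 0.84
```

A temporal extension, as in the thesis, would feed each time step's output back as an input so that trust evolves over the interaction rather than being recomputed from scratch.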
27 |
Initial steps toward human augmented mapping (Topp, Elin Anna, January 2006)
With the progress in research and product development, humans and robots are coming closer and closer to each other, and the idea of a personalised general service robot is not too far-fetched. Crucial for such a service robot is the ability to navigate in its working environment. This environment has to be assumed to be an arbitrary domestic or office-like environment that is shared with human users and bystanders. With methods developed and investigated in the field of simultaneous localisation and mapping, it has become possible for mobile robots to explore and map an unknown environment while staying localised with respect to their starting point and surroundings. These approaches, however, do not consider the representation of the environment that humans use to refer to particular places. Robotic maps are often metric representations of features obtained from sensory data, whereas humans have a more topological, in fact partially hierarchical, way of representing environments. Especially for communication between a user and her personal robot, it is thus necessary to provide a link between the robotic map and the human understanding of the robot's workspace.

The term Human Augmented Mapping is used for a framework that allows a robotic map to be integrated with human concepts, thus facilitating communication about the environment. By assuming an interactive setting for the map acquisition process, it is possible for the user to influence the process significantly. Personal preferences can be made part of the environment representation that the robot acquires. Advantages also become obvious for the mapping process itself, since in an interactive setting the robot can ask for information and resolve ambiguities with the help of the user. Thus, a scenario of a "guided tour", in which a user can ask a robot to follow her and present the surroundings, is assumed as the starting point for a system that integrates robotic mapping, interaction, and human environment representations.

Based on results from robotics research, psychology, human-robot interaction and cognitive science, a general architecture for a system for Human Augmented Mapping is presented. This architecture combines a hierarchically organised robotic mapping approach with interaction abilities via a high-level environment model. An initial system design and implementation that combines a tracking and following approach with a mapping system is described. Observations from a pilot study, in which this initial system was used successfully, are reported; they support the assumptions about the usefulness of the environment model as the link between robotic and human representations.
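The link between metric robot maps and human place labels that Human Augmented Mapping proposes can be caricatured in a few lines: labels acquired during a "guided tour" are attached to metric poses, and the robot answers location queries in the user's vocabulary. All names below are hypothetical, and a real system would label map regions or nodes rather than single points.

```python
import math

class AugmentedMap:
    """Minimal sketch: attach human-given place labels to metric poses so
    the robot can translate between coordinates and the user's vocabulary."""

    def __init__(self):
        self.places = {}                     # label -> (x, y) pose

    def label(self, name, pose):
        # during the guided tour the user says, e.g., "this is the kitchen"
        self.places[name] = pose

    def where_am_i(self, pose):
        """Nearest labelled place: the robot's answer to 'where are you?'."""
        def dist(p):
            return math.hypot(p[0] - pose[0], p[1] - pose[1])
        return min(self.places, key=lambda n: dist(self.places[n]))

m = AugmentedMap()
m.label("kitchen", (0.0, 0.0))
m.label("printer room", (5.0, 2.0))
print(m.where_am_i((4.2, 1.5)))   # prints: printer room
```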
28 |
Understanding Anthropomorphism in the Interaction Between Users and Robots (Zlotowski, Jakub Aleksander, January 2015)
Anthropomorphism is a common phenomenon in which people attribute human characteristics to non-human objects. It plays an important role in the acceptance of robots in natural human environments. Studies in the field of Human-Robot Interaction (HRI) show that various factors can affect the extent to which a robot is anthropomorphized. However, our knowledge of this phenomenon is fragmented, as there is no coherent model of anthropomorphism that could consistently explain these findings. A robot should be able to adjust its level of anthropomorphism to a level that optimizes its task performance. For that to be possible, robotic system designers must know which characteristics affect the perception of a robot's anthropomorphism. Existing models of anthropomorphism emphasize the importance of the context and the perceiver in this phenomenon, but provide few guidelines regarding which factors of the perceived object affect it.
The reverse of anthropomorphization is known as dehumanization. In recent years, research in social psychology has identified which characteristics are denied to people who are perceived as subhuman or are objectified. Furthermore, this work has shown that the process of dehumanization is two-dimensional rather than unidimensional. This thesis discusses a model of anthropomorphism in which characteristics from both dimensions of dehumanization, along with those relating to a robot's physical appearance, affect the anthropomorphism of a robot. Furthermore, the involvement of implicit and explicit processes in anthropomorphization is discussed.
In this thesis I present five empirical studies that were conducted to explore anthropomorphism in HRI. Chapter 3 discusses the development and validation of a cognitive measurement of humanlikeness using the magnitude of the inversion effect. Although robot stimuli were processed more similarly to human stimuli than to objects and induced the inversion effect, the results suggest that this measure has limited potential for measuring humanlikeness due to the low variance that it can explain. The second experiment, presented in Chapter 4, explored the involvement of Type I and Type II processing in anthropomorphism. The main findings of this study suggest that anthropomorphism is not the result of a dual process, and that self-reports have the potential to be suitable measurement tools for anthropomorphism.
Chapter 5 presents the first empirical work on the dimensionality of anthropomorphism. Only the perceived emotionality of a robot, not its perceived intelligence, affects its anthropomorphization. This finding is further supported by a follow-up experiment, presented in Chapter 6, which shows that the Human Uniqueness dimension is less relevant for a robot's anthropomorphizability than the Human Nature (HN) dimension. Intentionality of a robot did not result in higher anthropomorphizability. Furthermore, this experiment showed that the humanlike appearance of a robot is not linearly related to its anthropomorphism during HRI. The lack of a linear relationship between humanlike appearance and the attribution of HN traits to a robot during HRI is further supported by the study described in Chapter 7. This last experiment also shows that another factor of HN, sociability, affects the extent to which a robot is anthropomorphized, further supporting the relevance of the HN dimension in the process of anthropomorphization.
This thesis elaborates on the process of anthropomorphism as an important factor affecting HRI. Without fully understanding the process itself and which factors cause robots to be anthropomorphized, it is hard to measure the impact of anthropomorphism on HRI. It is hoped that understanding anthropomorphism in HRI will make it possible to design interactions in a way that optimizes the benefits of this phenomenon for the interaction.
29 |
A Dog Tail Interface for Communicating Affective States of Utility Robots (Singh, Ashish, January 2012)
As robots continue to enter people's spaces and environments, it will be increasingly important to have effective interfaces for interaction and communication. One aspect of this communication is people's awareness of the robot's actions and state. We believe that using high-level state representations as a peripheral awareness channel will help people stay aware of a robot's state in an easy-to-understand way. For example, when a robot is boxed into a small area, it can suggest a negative state (e.g., unwillingness to work in a small area, as it cannot clean the entire room) by appearing unhappy to people. To investigate this, we built a robotic dog-tail prototype and conducted a study of how different tail motions (based on several motion parameters, e.g., speed) influence people's perceptions of the robot. The results from this study formed design guidelines that Human-Robot Interaction (HRI) designers can leverage to convey robotic states.
Further, we evaluated our overall approach and tested these guidelines by conducting a design workshop with interaction designers, where we asked them to use the guidelines to design tail behaviors for various robotic states (e.g., looking for dirt) for robots working in different environments (e.g., domestic service). Results from this workshop helped improve the confusing parts of the guidelines and make them easier for designers to use. In conclusion, this thesis presents a set of consolidated design guidelines that HRI designers can leverage to convey the states of robots in a way that lets people readily understand when and how to interact with them.
30 |
Planning and Sequencing Through Multimodal Interaction for Robot Programming (Akan, Batu, January 2014)
Over the past few decades the use of industrial robots has increased the efficiency as well as the competitiveness of several sectors. Despite this fact, in many cases robot automation investments are considered to be technically challenging. In addition, for most small and medium-sized enterprises (SMEs) this process is associated with high costs. Due to their continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods for industrial robots are too complex for most technicians or manufacturing engineers, and thus assistance from a robot programming expert is often needed. The hypothesis is that, in order to make the use of industrial robots more common within the SME sector, the robots should be reprogrammable by technicians or manufacturing engineers rather than robot programming experts. In this thesis, a novel system for task-level programming is proposed. The user interacts with an industrial robot by giving instructions in a structured natural language and by selecting objects through an augmented reality interface. The proposed system consists of two parts: (i) a multimodal framework that provides a natural language interface for the user to interact through, performing modality fusion and semantic analysis, and (ii) a symbolic planner, POPStar, that creates a time-efficient plan based on the user's instructions. The ultimate goal of the work in this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague. The thesis mainly addresses two issues. The first is a general framework for designing and developing multimodal interfaces, designed to perform natural language understanding, multimodal integration and semantic analysis in an incremental pipeline.
The framework also includes a novel multimodal grammar language, which is used for multimodal presentation and semantic meaning generation. Such a framework helps make interaction with a robot easier and more natural. The proposed language architecture makes it possible to manipulate, pick or place objects in a scene through high-level commands. Interaction with simple voice commands and gestures enables the manufacturing engineer to focus on the task itself, rather than on the programming issues of the robot. The second issue addressed stems from inherent characteristics of natural language communication: instructions given by a user are often vague and may require other actions to be taken before the conditions for applying the user's instructions are met. To solve this problem, a symbolic planner, POPStar, based on a partial-order planner (POP) is proposed. The system takes landmarks extracted from user instructions as input and creates a sequence of actions to operate the robotic cell with minimal makespan. The proposed planner takes advantage of the partial-order capabilities of POP to execute actions in parallel, and employs a best-first search algorithm to seek the series of actions that leads to a minimal makespan. The planner can also handle robots with multiple grippers and parallel machines, as well as scheduling for multiple product types.
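The combination of parallel execution and best-first search for minimal makespan can be sketched generically (this is not POPStar itself; the action names and two-machine setup are invented): states are partial schedules over parallel machines, the search priority is the schedule's current makespan, and because that priority never decreases as actions are added, the first complete schedule popped is optimal for this toy model.

```python
import heapq

def min_makespan(durations, preds, machines=2):
    """Best-first search over partial schedules: each state keeps the
    machines' free times and the finish time of every scheduled action.
    The priority (current makespan) is a monotone lower bound, so the
    first complete schedule popped is optimal.  Illustrative sketch only."""
    names = tuple(sorted(durations))
    heap = [(0.0, (0.0,) * machines, frozenset())]   # (makespan, free, done)
    while heap:
        mk, free, done = heapq.heappop(heap)
        done_map = dict(done)                        # action -> finish time
        if len(done_map) == len(names):
            return mk
        for act in names:
            if act in done_map:
                continue
            if any(p not in done_map for p in preds.get(act, ())):
                continue                             # precedence not yet met
            ready = max([done_map[p] for p in preds.get(act, ())], default=0.0)
            for m in range(machines):
                begin = max(free[m], ready)
                end = begin + durations[act]
                nfree = list(free)
                nfree[m] = end
                heapq.heappush(heap, (max(mk, end), tuple(nfree),
                                      done | {(act, end)}))
    return None

# "pick" must precede "place"; "inspect" can run on the second gripper
# in parallel, so the makespan is 2 + 2 = 4 rather than 2 + 2 + 3 = 7.
durations = {"pick": 2.0, "place": 2.0, "inspect": 3.0}
preds = {"place": ("pick",)}
print(min_makespan(durations, preds))   # prints: 4.0
```

A real planner like POPStar works over partial-order plans with landmarks rather than enumerating states exhaustively, but the best-first principle, ordering expansion by a lower bound on makespan, is the same.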