31

Evaluating Human-robot Implicit Communication Through Human-human Implicit Communication

Richardson, Andrew Xenos 01 January 2012 (has links)
Human-Robot Interaction (HRI) research is examining ways to make human-robot (HR) communication more natural. Incorporating natural communication techniques is expected to make HR communication seamless and more natural for humans. Humans naturally incorporate implicit levels of communication, and including implicit communication in HR communication should provide tremendous benefit. The aim of this work was to evaluate a model for human-robot implicit communication. Specifically, the primary goal of this research was to determine whether humans can assign meanings to implicit cues received from autonomous robots as they do for identical implicit cues received from humans. An experiment was designed to allow participants to assign meanings to identical implicit cues (pursuing, retreating, investigating, hiding, patrolling) received from humans and robots. Participants were tasked to view random video clips of both entity types, label the implicit cue, and assign a level of confidence in their chosen answer. Physiological data were tracked during the experiment using an electroencephalogram and eye-tracker. Participants answered workload and stress questionnaires following each scenario. Results revealed that participants were significantly more accurate with human cues (84%) than with robot cues (82%); however, participants were highly accurate, above 80%, for both entity types. Despite the high accuracy for both types, participants remained significantly more confident in answers for humans (6.1) than for robots (5.9) on a confidence scale of 1 to 7. Subjective measures showed no significant differences in stress or mental workload across entities. Physiological measures showed no significant difference in the engagement index across entities, but robots elicited significantly higher cognitive workload in participants as measured by the index of cognitive activity.
The results of this study revealed that participants are more confident interpreting human implicit cues than identical cues received from a robot. However, accuracy of interpretation remained high for both entities, and participants showed no significant difference in interpreting the different cues across entities. Therefore, much of the ability to interpret an implicit cue resides in the cue itself rather than in the entity. Proper training should boost confidence as humans begin to work alongside autonomous robots as teammates, and it is possible to train humans to recognize cues based on the movement, regardless of the entity demonstrating it.
32

Decision shaping and strategy learning in multi-robot interactions

Valtazanos, Aris January 2013 (has links)
Recent developments in robot technology have contributed to the advancement of autonomous behaviours in human-robot systems; for example, in following instructions received from an interacting human partner. Nevertheless, increasingly many systems are moving towards more seamless forms of interaction, where factors such as implicit trust and persuasion between humans and robots are brought to the fore. In this context, the problem of attaining, through suitable computational models and algorithms, more complex strategic behaviours that can influence human decisions and actions during an interaction, remains largely open. To address this issue, this thesis introduces the problem of decision shaping in strategic interactions between humans and robots, where a robot seeks to lead, without however forcing, an interacting human partner to a particular state. Our approach to this problem is based on a combination of statistical modeling and synthesis of demonstrated behaviours, which enables robots to efficiently adapt to novel interacting agents. We primarily focus on interactions between autonomous and teleoperated (i.e. human-controlled) NAO humanoid robots, using the adversarial soccer penalty shooting game as an illustrative example. We begin by describing the various challenges that a robot operating in such complex interactive environments is likely to face. Then, we introduce a procedure through which composable strategy templates can be learned from provided human demonstrations of interactive behaviours. We subsequently present our primary contribution to the shaping problem, a Bayesian learning framework that empirically models and predicts the responses of an interacting agent, and computes action strategies that are likely to influence that agent towards a desired goal. We then address the related issue of factors affecting human decisions in these interactive strategic environments, such as the availability of perceptual information for the human operator. 
Finally, we describe an information processing algorithm, based on the Orient motion capture platform, which serves to facilitate direct (as opposed to teleoperation-mediated) strategic interactions between humans and robots. Our experiments introduce and evaluate a wide range of novel autonomous behaviours, where robots are shown to (learn to) influence a variety of interacting agents, ranging from other simple autonomous agents, to robots controlled by experienced human subjects. These results demonstrate the benefits of strategic reasoning in human-robot interaction, and constitute an important step towards realistic, practical applications, where robots are expected to be not just passive agents, but active, influencing participants.
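The abstract's Bayesian framework empirically models an opponent's responses and picks actions likely to steer that opponent toward a desired state. A minimal sketch of that idea, under assumed names and an assumed Dirichlet-smoothed count model (not the thesis's actual algorithm), using the penalty-shooting game as the running example:

```python
from collections import defaultdict

class BayesianResponseModel:
    """Toy Dirichlet-multinomial model of an interacting agent's responses.

    For each of our actions, keep counts of the opponent's observed
    responses; the posterior predictive probability of a response is
    (count + 1) / (total + K), where K is the number of possible responses.
    """

    def __init__(self, our_actions, their_actions):
        self.their_actions = their_actions
        self.counts = {a: defaultdict(int) for a in our_actions}

    def observe(self, our_action, their_response):
        # record one interaction round
        self.counts[our_action][their_response] += 1

    def predict(self, our_action, their_response):
        # Laplace-smoothed posterior predictive
        total = sum(self.counts[our_action].values())
        k = len(self.their_actions)
        return (self.counts[our_action][their_response] + 1) / (total + k)

    def best_action(self, desired_response):
        # choose the action most likely to elicit the desired response
        return max(self.counts, key=lambda a: self.predict(a, desired_response))
```

After observing a goalkeeper who repeatedly dives left when we shoot right, `best_action("dive_left")` returns `"shoot_right"` — the shaping step of choosing an action that influences the opponent toward the state we want.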
33

Implementation of a human-robot collaboration cell in a lab environment : Implementation of human-robot collaboration

Wiemann, Marcus January 2019 (has links)
In a previous final-year project, a study was carried out on how a human-robot collaborative cell would work in a laboratory environment. In Arvid Boberg's "HRC Implementation in laboratory environment", the cell was to be developed on behalf of Eurofins to work with chemical and microbiological analyses in agriculture, food and environment. To verify the suggested solutions, an implementation needed to be designed in a physical environment. The main purpose of the project was to develop a collaborative cell that would perform tasks in a lab environment. For this purpose, a station, work operations and components have been developed and implemented at ASSAR. The station has been programmed to showcase the possibilities the robot has to offer in a collaborative cell, with the help of ABB RobotStudio and online programming. The choice of robot was, if possible, to make use of ABB's YuMi robot, because this was the robot that the pre-study on which this work builds used in its model and theory, and because the pre-study is the foundation of this project. The implementation of the station was completed in steps in order to test different layouts and obtain a better understanding of the robot's characteristics and what it is capable of performing in terms of reach and flexibility.
To create the more advanced features of the program, offline programming in ABB RobotStudio was used in combination with online programming. The functions are too advanced to write on a TeachPendant, because long lines of code are needed to create the advanced functions the robot uses to perform its tasks. The work at ASSAR led to several different solutions being developed and considered until a concept was chosen and implemented at ASSAR, in the form of a collaborative cell that showcases various functions for performing tasks in a lab environment using the YuMi robot from ABB and a worktable created during the project. The project achieved several of its goals, but some were not achieved because of delays that arose during the course of the project. The delays changed the workflow, and the result the author tried to achieve was adjusted in order to deliver a collaborative cell and a result for the project.
34

A developmental model of trust in humanoid robots

Patacchiola, Massimiliano January 2018 (has links)
Trust between humans and artificial systems has recently received increased attention due to the widespread use of autonomous systems in our society. In this context, trust plays a dual role. On the one hand, it is necessary to build robots that are perceived as trustworthy by humans. On the other hand, we need to give those robots the ability to discriminate between reliable and unreliable informants. This thesis focuses on the second problem, presenting an interdisciplinary investigation of trust, in particular a computational model based on neuroscientific and psychological assumptions. First, the use of Bayesian networks for modelling causal relationships was investigated. This approach follows the well-known theory-theory framework of the Theory of Mind (ToM) and an established line of research based on the Bayesian description of mental processes. Next, the role of gaze in human-robot interaction was investigated. The results of this research were used to design a head-pose estimation system based on Convolutional Neural Networks. The system can be used in robotic platforms to facilitate joint attention tasks and enhance trust. Finally, everything was integrated into a structured cognitive architecture. The architecture is based on an actor-critic reinforcement learning framework and an intrinsic motivation feedback signal given by a Bayesian network. In order to evaluate the model, the architecture was embodied in the iCub humanoid robot and used to replicate a developmental experiment. The model provides a plausible description of children's reasoning that sheds some light on the underlying mechanisms involved in trust-based learning. In the last part of the thesis, the contribution of human-robot interaction research is discussed, with the aim of understanding the factors that influence the establishment of trust during joint tasks.
Overall, this thesis provides a computational model of trust that takes into account the development of cognitive abilities in children, with a particular emphasis on the ToM and the underlying neural dynamics.
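The informant-reliability problem the abstract describes — deciding which informants to trust based on whether their past reports proved correct — can be illustrated with a toy Beta-Bernoulli estimator. This is a deliberately minimal stand-in for the thesis's Bayesian-network model; the class and parameter names are assumptions:

```python
class InformantTrust:
    """Beta-Bernoulli estimate of an informant's reliability.

    Start from a Beta(alpha, beta) prior over the probability that the
    informant's reports are correct, and update it with each observed
    outcome. The posterior mean serves as a running trust score.
    """

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of correct reports
        self.beta = beta    # pseudo-count of incorrect reports

    def update(self, report_was_correct):
        if report_was_correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def reliability(self):
        # posterior mean of the Beta distribution
        return self.alpha / (self.alpha + self.beta)
```

A robot maintaining one such estimator per informant can prefer to learn from the informant with the highest posterior reliability, which is the discrimination ability the thesis aims at.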
35

Adaptive Optimal Control in Physical Human-Robot Interaction

January 2019 (has links)
What if there were a way to integrate prosthetics seamlessly with the human body, and robots could help improve the lives of children with disabilities? With physical human-robot interaction appearing in many aspects of life, including industrial, medical, and social settings, how these robots interact with humans becomes even more important. How smoothly a robot can interact with a person determines how safe and efficient the relationship will be. This thesis investigates an adaptive control method that allows a robot to adapt to the human's actions based on the interaction force, making the relationship more effortless and less strained when the robot has a different goal than the human, as seen in game theory, using multiple techniques that adapt the system. Applications include robots in physical therapy, manufacturing robots that can adapt to a changing environment, and robots teaching people new skills, such as dancing or learning how to walk after surgery. The experience gained is an understanding of how the cost function of a system works, including the tracking error, the speed of the system, the robot's effort, and the human's effort. This two-agent system results in a two-agent adaptive impedance model with an input for each agent, leading to a nontraditional linear quadratic regulator (LQR) that must be separated and then recombined into a traditional LQR. This knowledge can be used in the future to help build better safety protocols for manufacturing robots and to develop technologies that allow a robot to adapt so as to counteract human error. / Dissertation/Thesis / Masters Thesis Engineering 2019
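The cost-function trade-off the abstract names — weighting tracking error against control effort — is exactly what an LQR balances. A minimal scalar sketch (the plant, weights, and function name are illustrative assumptions, not the thesis's two-agent model): iterate the discrete-time Riccati recursion to convergence and return the feedback gain.

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Scalar discrete-time LQR for x[k+1] = a*x[k] + b*u[k].

    Minimizes sum(q*x^2 + r*u^2) by iterating the Riccati recursion
        k = a*b*p / (r + b*b*p)
        p = q + a*p*(a - b*k)
    and returns the gain k for the control law u = -k*x.
    """
    p = q
    k = 0.0
    for _ in range(iters):
        k = (a * b * p) / (r + b * b * p)
        p = q + a * p * (a - b * k)
    return k

# toy unstable plant; q weights tracking error, r weights control effort,
# echoing the cost terms named in the abstract
a, b, q, r = 1.1, 0.5, 1.0, 0.1
k = dlqr_scalar(a, b, q, r)
```

Raising `r` (penalizing effort) yields a gentler gain at the cost of slower error decay; raising `q` does the opposite. The resulting closed loop `a - b*k` has magnitude below one, i.e. the regulator stabilizes the plant.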
36

From Qualitative to Quantitative: Supporting Robot Understanding in Human-Interactive Path Planning

Yi, Daqing 01 August 2016 (has links)
Improvements in robot autonomy are changing human-robot interaction from low-level manipulation to high-level task-based collaboration. When a robot can independently and autonomously execute tasks, a human in a human-robot team acts as a collaborator or task supervisor instead of a tele-operator. When applying this to planning paths for a robot's motion, it is very important that the supervisor's qualitative intent is translated into a quantitative model so that the robot can produce a desirable consequence. In robotic path planning, algorithms can transform a human's qualitative requirement into a robot's quantitative model so that the robot's behavior satisfies the human's intent. In particular, algorithms can be created that allow a human to express multi-objective and topological preferences, and can be built to use verbal communication. This dissertation presents a series of robot motion-planning algorithms, each of which is designed to support some aspect of a human's intent. Specifically, we present algorithms for the following problems: planning with a human-motion constraint, planning with a topological requirement, planning with multiple objectives, and creating models of constraints, requirements, and objectives from verbal instructions. These algorithms create a set of robot behaviors that support flexible decision-making over a range of complex path-planning tasks.
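One common way to turn qualitative multi-objective preferences into a quantitative model, as the abstract describes, is weighted-sum scalarization of per-objective path costs. A toy sketch, with all function names and the obstacle-clearance objective assumed for illustration (the dissertation's actual formulation may differ):

```python
import math

def path_length(path):
    """Total Euclidean length of a polyline path of (x, y) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def clearance_penalty(obstacle):
    """Objective that grows as waypoints approach a point obstacle."""
    def penalty(path):
        return sum(1.0 / (math.dist(p, obstacle) + 1e-6) for p in path)
    return penalty

def path_cost(path, objectives, weights):
    """Weighted-sum scalarization of multiple path objectives.

    The weights encode the human's qualitative preference (e.g. 'short
    but keep away from the doorway') as a single quantitative cost.
    """
    return sum(w * obj(path) for obj, w in zip(objectives, weights))
```

A planner can then compare candidate paths by `path_cost`, and shifting weight from length to clearance quantitatively expresses a preference like "prefer safer detours".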
37

From HCI to HRI : Designing Interaction for a Service Robot

Hüttenrauch, Helge January 2006 (has links)
Service robots are mobile, embodied artefacts that operate in co-presence with their users. This is a challenge for human-robot interaction (HRI) design. The robot's interfaces must support users in understanding the system's current state and possible next actions. One aspect in the design of such interaction is to understand users' preferences and expectations by involving them in the design process. This thesis takes a user-centered design (UCD) perspective and tries to understand the different user roles that exist in service robotics in order to consider possible design implications. Another important aim of the thesis is to understand the spatial management that occurs in face-to-face encounters between humans and robotic systems. The Cero robot is an office "fetch-and-carry" robot that supports a user in the transportation of light objects in an office environment. The iterative, user-centered design of the graphical user interface (GUI) for the Cero robot is presented in Paper I. It is based upon the findings from multiple prototype design and evaluation iterations. The GUI is one of the robot's interfacing components, i.e., it is to be seen in the overall interplay of the robot's physical design and the other interface modalities developed in parallel with the GUI. As the interaction strategy for the GUI, a graphical representation based upon simplification of the graphical elements, as well as hiding the robot system's complexity in sensing and mission execution, is recommended. The usage of the Cero robot by a motion-impaired user over a period of three months is presented in Paper II. This longitudinal user study aims to gain insights into the daily usage of such an assistive robot. This approach is complementary to the described GUI design and development process, as it allows empirically investigating situated use of the Cero robot as a novel service application over a longer period of time with the provided interfaces.
Findings from this trial show that the robot and its interfaces provide a benefit to the user in the transport of light objects and also imply increased independence. The long-term study also reveals further aspects of the Cero robot system's usage as part of a workplace setting, including the social context that such a mobile, embodied system needs to be designed for. During the long-term user study, bystanders in the operation area of the Cero robot were observed in their attempts to interact with it. To better understand how such bystander users may shape the interaction with a service robot system, an experimental study investigates this special type and role of robot users in Paper III. A scenario in which the Cero robot addresses and asks invited trial subjects for a cup of coffee is described. The findings show that the level of occupation significantly influences bystander users' willingness to assist the Cero robot with its request. The joint handling of space is an important part of HRI, as both users and service robots are mobile and often co-present during interaction. To inform the development of future robot locomotion behaviors and interaction design strategies, a Wizard-of-Oz (WOZ) study is presented in Paper IV that explores the role of posture and positioning in HRI. The interpersonal distances and spatial formations that were observed during this trial are quantified and analyzed in a joint interaction task between a robot and its users in Paper V. Findings show that a face-to-face spatial formation and a distance between ~46 and ~122 cm are dominant while initiating a robot mission or instructing the robot about an object or place. Paper VI investigates another aspect of the role of spatial management in the joint task between a robot and its user, based upon the study described in Papers IV and V.
Taking the dynamics of interaction into account, the findings are that users structure their activities with the robot and that this organizing is observable as small movements in interaction. These small adaptations in posture and orientation signify the transition between different episodes of interaction and prepare for the next interaction exchange in the shared space. The understanding of these spatial management behaviors allows designing human-robot interaction based upon such awareness and the active handling of space as a structuring interaction element. / QC 20100617
38

A Novel Approach for Performance Assessment of Human-Robotic Interaction

Abou Saleh, Jamil 16 March 2012 (has links)
Robots have always been touted as powerful tools that could be used effectively in a number of applications ranging from automation to human-robot interaction. In order for such systems to operate adequately and safely in the real world, they must be able to perceive, and must have reasoning abilities up to a certain level. Toward this end, performance evaluation metrics are used as important measures. This research work is intended to be a further step toward identifying common metrics for task-oriented human-robot interaction. We believe that within the context of human-robot interaction systems, both humans' and robots' actions and interactions (jointly and independently) can significantly affect the quality of the accomplished task. As such, our goal becomes that of providing a foundation upon which we can assess how well the human and the robot perform as a team. Thus, we propose a generic performance metric to assess the performance of the human-robot team, where one or more robots are involved. Sequential and parallel robot cooperation schemes with varying levels of task dependency are considered, and the proposed performance metric is augmented and extended to accommodate such scenarios. This is supported by intuitively derived mathematical models and advanced numerical simulations. To efficiently model such a metric, we propose a two-level fuzzy temporal model to evaluate and estimate human trust in automation while collaborating and interacting with robots and machines to complete tasks. Trust modelling is critical, as it directly influences the interaction time that should be dedicated, directly and indirectly, to interacting with the robot. Another fuzzy temporal model is also presented to evaluate human reliability during interaction time.
A significant amount of research work stipulates that system failures are due almost equally to humans and to machines; therefore, assessing this factor in human-robot interaction systems is crucial. The proposed framework is based on the most recent research work in the areas of human-machine interaction and performance evaluation metrics. The fuzzy knowledge bases are further updated by implementing an application robotic platform where robots and users interact via semi-natural language to achieve tasks with varying levels of complexity and completion rates. User feedback is recorded and used to tune the knowledge base where needed. This work intends to serve as a foundation for further quantitative research to evaluate the performance of human-robot teams in the achievement of collective tasks.
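The fuzzy-modelling approach the abstract relies on can be illustrated with a toy fuzzification step: map a crisp performance score onto overlapping linguistic trust sets via triangular membership functions. The breakpoints and set names below are assumptions for illustration, not the thesis's actual two-level temporal model:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_level(success_rate):
    """Fuzzify a crisp success rate in [0, 1] into a linguistic trust label.

    Assumed fuzzy partition: 'low' peaks at 0, 'medium' at 0.5,
    'high' at 1 (edges nudged so the boundary values stay inside a set).
    """
    memberships = {
        "low": tri(success_rate, -0.01, 0.0, 0.5),
        "medium": tri(success_rate, 0.0, 0.5, 1.0),
        "high": tri(success_rate, 0.5, 1.0, 1.01),
    }
    # defuzzify by taking the label with the largest membership
    return max(memberships, key=memberships.get)
```

A temporal extension, as in the thesis, would additionally let these memberships evolve with interaction time rather than depend on a single crisp input.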
39

Initial steps toward human augmented mapping

Topp, Elin Anna January 2006 (has links)
With the progress in research and product development, humans and robots are getting closer and closer to each other, and the idea of a personalised general service robot is not too far-fetched. Crucial for such a service robot is the ability to navigate in its working environment. The environment has to be assumed to be an arbitrary domestic or office-like environment that has to be shared with human users and bystanders. With methods developed and investigated in the field of simultaneous localisation and mapping, it has become possible for mobile robots to explore and map an unknown environment, while staying localised with respect to their starting point and the surroundings. These approaches, though, do not consider the representation of the environment that is used by humans to refer to particular places. Robotic maps are often metric representations of features that can be obtained from sensory data. Humans have a more topological, in fact partially hierarchical, way of representing environments. Especially for the communication between a user and her personal robot, it is thus necessary to provide a link between the robotic map and the human understanding of the robot's workspace.
The term Human Augmented Mapping is used for a framework that allows the integration of a robotic map with human concepts. Communication about the environment can thus be facilitated. By assuming an interactive setting for the map acquisition process, it is possible for the user to influence the process significantly. Personal preferences can be made part of the environment representation that the robot acquires. Advantages also become obvious for the mapping process itself, since in an interactive setting the robot can ask for information and resolve ambiguities with the help of the user. Thus, a scenario of a "guided tour", in which a user asks a robot to follow while presenting the surroundings, is assumed as the starting point for a system for the integration of robotic mapping, interaction and human environment representations.
Based on results from robotics research, psychology, human-robot interaction and cognitive science, a general architecture for a system for Human Augmented Mapping is presented. This architecture combines a hierarchically organised robotic mapping approach with interaction abilities with the help of a high-level environment model. An initial system design and implementation that combines a tracking-and-following approach with a mapping system is described. Observations from a pilot study in which this initial system was used successfully are reported and support the assumptions about the usefulness of the environment model that is used as the link between robotic and human representations.
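The core link the abstract calls for — tying human place labels from a guided tour to the robot's metric map — can be sketched as a minimal lookup structure. Class and method names are illustrative assumptions, not the thesis's architecture:

```python
import math

class AugmentedMap:
    """Toy link between metric robot poses and human place labels.

    During a 'guided tour', the user names the current location and the
    robot stores its metric pose under that label; later, the robot can
    answer in human terms by finding the nearest labelled place.
    """

    def __init__(self):
        self.places = {}  # label -> (x, y) metric position

    def label(self, name, x, y):
        # user names the robot's current position during the tour
        self.places[name] = (x, y)

    def where_am_i(self, x, y):
        # map a metric pose back to the closest human place label
        return min(self.places,
                   key=lambda n: math.dist(self.places[n], (x, y)))
```

A full system would attach these labels to nodes of a hierarchical topological map rather than raw coordinates, but the sketch shows the translation step between metric and human representations.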
40

Understanding Anthropomorphism in the Interaction Between Users and Robots

Zlotowski, Jakub Aleksander January 2015 (has links)
Anthropomorphism is a common phenomenon in which people attribute human characteristics to non-human objects. It plays an important role in the acceptance of robots in natural human environments. Studies in the field of Human-Robot Interaction (HRI) show that numerous factors can affect the extent to which a robot is anthropomorphized. However, our knowledge of this phenomenon is segmented, as there is a lack of a coherent model of anthropomorphism that could consistently explain these findings. A robot should be able to adjust its level of anthropomorphism to a level that can optimize its task performance. In order to do that, robotic system designers must know which characteristics affect the perception of robots' anthropomorphism. Currently existing models of anthropomorphism emphasize the importance of the context and the perceiver in this phenomenon, but provide few guidelines regarding the factors of a perceived object that affect it. The reverse process to anthropomorphization is known as dehumanization. In recent years, research in social psychology has identified which characteristics are denied to people who are perceived as subhuman or are objectified. Furthermore, the process of dehumanization is two-dimensional rather than unidimensional. This thesis discusses a model of anthropomorphism that uses characteristics from both dimensions of dehumanization, as well as those relating to robots' physical appearance, to explain the anthropomorphism of a robot. Furthermore, the involvement of implicit and explicit processes in anthropomorphization is discussed. In this thesis I present five empirical studies that were conducted to explore anthropomorphism in HRI. Chapter 3 discusses the development and validation of a cognitive measurement of humanlikeness using the magnitude of the inversion effect.
Although robot stimuli were processed more similarly to human stimuli than to objects and induced the inversion effect, the results suggest that this measure has limited potential for measuring humanlikeness due to the low variance that it can explain. The second experiment, presented in Chapter 4, explored the involvement of Type I and Type II processing in anthropomorphism. The main findings of this study suggest that anthropomorphism is not the result of a dual process and that self-reports have the potential to be suitable measurement tools for anthropomorphism. Chapter 5 presents the first empirical work on the dimensionality of anthropomorphism. Only the perceived emotionality of a robot, but not its perceived intelligence, affects its anthropomorphization. This finding is further supported by a follow-up experiment, presented in Chapter 6, which shows that the Human Uniqueness dimension is less relevant to a robot's anthropomorphizability than the Human Nature (HN) dimension. Intentionality of a robot did not result in its higher anthropomorphizability. Furthermore, this experiment showed that the humanlike appearance of a robot is not linearly related to its anthropomorphism during HRI. The lack of a linear relationship between humanlike appearance and the attribution of HN traits to a robot during HRI is further supported by the study described in Chapter 7. This last experiment also shows that another factor of HN, sociability, affects the extent to which a robot is anthropomorphized, and therefore the relevance of the HN dimension in the process of anthropomorphization. This thesis elaborates on the process of anthropomorphism as an important factor affecting HRI. Without fully understanding the process itself and what factors cause robots to be anthropomorphized, it is hard to measure the impact of anthropomorphism on HRI.
It is hoped that understanding anthropomorphism in HRI will make it possible to design interactions in a way that optimizes the benefits of the phenomenon.
