101

Force and Motion Based Methods for Planar Human-Robot Co-manipulation of Extended Objects

Mielke, Erich Allen 01 April 2018 (has links)
As robots become more common operating in close proximity to people, new opportunities arise for physical human-robot interaction, such as co-manipulation of extended objects. Co-manipulation involves physical interaction between two partners where an object held by both is manipulated in tandem. There is a dearth of viable high degree-of-freedom co-manipulation controllers, especially for extended objects, as well as a lack of information about how human-human teams perform in high degree-of-freedom tasks. One method for creating co-manipulation controllers is to pattern them on human data, and this thesis takes that approach by exploring a previously completed experimental study. The study involved human-human dyads in a leader-follower format performing co-manipulation tasks with an extended object in 6 degrees of freedom. Two important tasks performed in this experiment were lateral translation and planar rotation. This thesis focuses on these two tasks because they represent planar motion, whereas most previous control methods address only 1 or 2 degrees of freedom. The study provided information about how human-human dyads perform planar tasks: most notably, planar tasks generally adhere to minimum-jerk trajectories and do not minimize interaction forces between users. The study also helped solve the translation-versus-rotation problem. From the experimental data, torque patterns were discovered at the beginning of each trial that defined the intent to translate or rotate. From these patterns, a new method of planar co-manipulation control was developed, called Extended Variable Impedance Control. This is a novel 3 degree-of-freedom method that is applicable to a variety of planar co-manipulation scenarios. Additionally, the data were fed through a Recurrent Neural Network that takes in a series of motion data and predicts the next step in the series. The predicted data were used as an intent estimate in another novel 3 degree-of-freedom method called Neural Network Prediction Control. This method is capable of generalizing to 6 degrees of freedom, but is limited to the plane in this thesis for comparison with the other method. An experiment involving 16 participants was developed to test the capabilities of both controllers for planar tasks, using a dual-manipulator robot with an omnidirectional base. The results show that both the Neural Network Prediction Control and Extended Variable Impedance Control controllers performed comparably to blindfolded human-human dyads, and a survey indicated that participants preferred the Extended Variable Impedance Control. These two unique controllers are the major results of this work.
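A rough feel for the two ingredients above can be given in code. The sketch below is a generic planar admittance controller whose damping drops as the human pushes harder, in the spirit of variable impedance control, together with the classic minimum-jerk profile that the dyad data reportedly follow. It is not the thesis's Extended Variable Impedance Control law; all gains, masses, and thresholds are invented for illustration.

```python
import numpy as np

def minimum_jerk(x0, xf, T, t):
    """Classic minimum-jerk position profile between x0 and xf over duration T."""
    s = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

class PlanarVariableImpedance:
    """Admittance-style controller in (x, y, theta).

    The wrench measured at the object is mapped to a commanded planar velocity
    through a virtual mass-damper whose damping is scaled down as the human
    pushes harder, so the object feels lighter during intentional motion.
    All numbers are illustrative placeholders, not values from the thesis.
    """
    def __init__(self, mass=(10.0, 10.0, 2.0), damping=(40.0, 40.0, 8.0),
                 min_scale=0.3, force_ref=20.0, dt=0.01):
        self.M = np.diag(mass)       # virtual inertia for x, y, theta
        self.D0 = np.diag(damping)   # nominal damping
        self.min_scale = min_scale   # lower bound on the damping scale
        self.force_ref = force_ref   # force magnitude that saturates the scaling
        self.dt = dt
        self.vel = np.zeros(3)       # current commanded [vx, vy, omega]

    def step(self, wrench):
        """wrench: measured [Fx, Fy, Tz] applied by the human through the object."""
        wrench = np.asarray(wrench, dtype=float)
        # Scale damping between min_scale and 1.0 based on how hard the human pushes.
        effort = min(np.linalg.norm(wrench[:2]) / self.force_ref, 1.0)
        D = self.D0 * (1.0 - (1.0 - self.min_scale) * effort)
        # Admittance dynamics: M * dv/dt + D * v = wrench
        acc = np.linalg.solve(self.M, wrench - D @ self.vel)
        self.vel = self.vel + acc * self.dt
        return self.vel              # velocity command for an omnidirectional base

# Example: the human applies a steady 15 N push in +x with a small torque.
ctrl = PlanarVariableImpedance()
for _ in range(200):
    cmd = ctrl.step([15.0, 0.0, 1.0])
print("commanded [vx, vy, omega]:", np.round(cmd, 3))
```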
102

Interface de operação para veículos não tripulados / Operation interface for unmanned vehicles

Ferreira, António Sérgio Borges dos Santos January 2010 (has links)
Integrated master's thesis (mestrado integrado) in Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto, 2010.
103

Desenvolvimento de técnicas de acompanhamento para interação entre humano e uma equipe de robôs / Development of following techniques for interaction of human and multi-robot teams

Batista, Murillo Rehder 17 December 2018 (has links)
Robotics has advanced significantly over the last few decades, yielding commercial products such as robotic vacuum cleaners and autonomous quadcopter drones. With the increasing presence of robots in our routines, it is necessary to develop human-robot interaction schemes to manage how people and robots live and work together. Previous work addresses a single robot performing socially acceptable human following, but does not consider the case of a robot team navigating with a person while respecting proxemics. This thesis proposes a solution for social navigation between a human and a robot team, combining socially aware human-following techniques with a multi-robot escorting method to generate four bioinspired navigation strategies based on collective intelligence and social behavior. Simulated experiments compare these four strategies across various scenarios, highlighting the advantages and disadvantages of each. An experiment with real robots investigated whether people's perception differs when interacting with one robot or three, and no difference in the participants' impressions was found.
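As a loose illustration of proxemics-aware escorting (not the four bioinspired strategies evaluated in the thesis), the sketch below places a robot team on an arc behind and beside a walking person at an assumed social distance; the 1.2 m radius and the arc limits are illustrative choices, not parameters from the work.

```python
import math

def escort_goals(human_xy, human_heading, n_robots, social_dist=1.2):
    """Place n_robots on an arc behind and beside a walking person.

    human_xy: (x, y) position; human_heading: walking direction in radians.
    social_dist: illustrative proxemic radius, roughly at the personal/social
    boundary. Returns a list of (x, y, heading) goals, one per robot.
    """
    goals = []
    # Spread robots over a rear arc from 100 to 260 degrees relative to the
    # heading, leaving the space directly in front of the person free.
    arc_start, arc_end = math.radians(100), math.radians(260)
    for i in range(n_robots):
        frac = 0.5 if n_robots == 1 else i / (n_robots - 1)
        ang = human_heading + arc_start + frac * (arc_end - arc_start)
        gx = human_xy[0] + social_dist * math.cos(ang)
        gy = human_xy[1] + social_dist * math.sin(ang)
        goals.append((gx, gy, human_heading))  # robots face the walking direction
    return goals

# Three robots escorting a person walking along +x from the origin.
print(escort_goals((0.0, 0.0), 0.0, 3))
```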
104

Robotic System Design For Reshaping Estimated Human Intention In Human-robot Interactions

Durdu, Akif 01 October 2012 (has links) (PDF)
This thesis outlines the methodology and experiments associated with reshaping human intention based on robot movements in Human-Robot Interaction (HRI). Although estimating human intentions is a well-studied research area in the literature, reshaping intentions through interaction is a new and significant branch of the human-robot interaction field. In this thesis, we analyze how previously estimated human intentions change based on a person's actions while cooperating with mobile robots in a real human-robot environment. Our approach uses Observable Operator Models (OOMs) and Hidden Markov Models (HMMs) in an intelligent mobile robotic system that consists of two levels: the low level tracks the human, while the high level guides the mobile robots into moves that aim to change the intentions of individuals in the environment. In the low level, the postures and locations of the human are monitored by applying image processing methods. The high level uses an algorithm that combines learned OOM or HMM models for estimating human intention with a decision-making system that reshapes the previously estimated intention. To our knowledge, this thesis is the first to apply OOMs to human-robot interaction applications. The two-level system is tested on video frames taken from a real human-robot environment, and the results obtained with the proposed approaches are compared in terms of how well they reshape the detected intentions.
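For readers unfamiliar with HMM-based intention estimation, the sketch below shows forward-algorithm filtering of a hidden "intention" from discrete posture/location observations. The states, observation symbols, and probabilities are toy values, not the models learned in the thesis, and an OOM would replace the HMM machinery in the corresponding variant.

```python
import numpy as np

# Toy HMM: hidden intentions and discrete observations derived from tracked
# posture/location. All names and numbers are invented for illustration.
states = ["go_to_door", "go_to_desk"]
obs_symbols = ["near_door", "near_desk", "standing_still"]

A = np.array([[0.9, 0.1],       # intention transition probabilities
              [0.1, 0.9]])
B = np.array([[0.6, 0.1, 0.3],  # P(observation | intention)
              [0.1, 0.6, 0.3]])
pi = np.array([0.5, 0.5])       # prior over intentions

def filter_intention(observations):
    """Forward-algorithm filtering: P(intention_t | o_1..o_t) at each step."""
    belief = pi * B[:, obs_symbols.index(observations[0])]
    belief /= belief.sum()
    beliefs = [belief]
    for o in observations[1:]:
        belief = (A.T @ belief) * B[:, obs_symbols.index(o)]
        belief /= belief.sum()
        beliefs.append(belief)
    return beliefs

seq = ["standing_still", "near_door", "near_door"]
for o, b in zip(seq, filter_intention(seq)):
    print(o, dict(zip(states, np.round(b, 2))))
```

In a reshaping setting, the high level would monitor this belief and select robot moves intended to push it toward a desired intention rather than merely observe it.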
105

Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping

Topp, Elin Anna January 2008 (has links)
An issue widely discussed in robotics research is the ageing society, with its consequences for care-giving institutions and opportunities for developments in the area of service robots and robot companions. The general idea of using robotic systems in a personal or private context to support an independent way of living, not only for the elderly but also for the physically impaired, is pursued in different ways, ranging from socially oriented robotic pets to mobile assistants. Thus, the idea of the personalised general service robot is not too far-fetched. Crucial for such a service robot is the ability to navigate in its working environment, which has to be assumed to be an arbitrary domestic or office-like environment that is shared with human users and bystanders. With methods developed and investigated in the field of simultaneous localisation and mapping, it has become possible for mobile robots to explore and map an unknown environment while staying localised with respect to their starting point and the surroundings. These approaches, though, do not consider the representation of the environment that is used by humans to refer to particular places. Robotic maps are often metric representations of features that can be obtained from sensory data, whereas humans have a more topological, in fact partially hierarchical, way of representing environments. Especially for communication between a user and her personal robot, it is thus necessary to provide a link between the robotic map and the human understanding of the robot's workspace. The term Human Augmented Mapping is used for a framework that allows a robotic map to be integrated with human concepts, so that communication about the environment can be facilitated. By assuming an interactive setting for the map acquisition process, it is possible for the user to influence the process significantly; personal preferences can be made part of the environment representation that is acquired by the robot. Advantages also become obvious for the mapping process itself, since in an interactive setting the robot can ask for information and resolve ambiguities with the help of the user. Thus, a scenario of a "guided tour", in which a user asks a robot to follow while presenting the surroundings, is assumed as the starting point for a system that integrates robotic mapping, interaction and human environment representations. A central point is the development of a generic, partially hierarchical environment model, which is applied in a topological graph structure as part of an overall experimental Human Augmented Mapping system implementation. Different aspects regarding the representation of entities of the spatial concepts used in this hierarchical model, particularly regions, are investigated. The proposed representation is evaluated both as a description of delimited regions and for the detection of transitions between them. Three user studies investigate and discuss different aspects of the human-robot interaction issues of Human Augmented Mapping. Results from the studies support the proposed model and representation approaches and can serve as a basis for further studies in this area.
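A minimal sketch of the kind of topological layer described above might look as follows; the class name, the region/location split, and the guided-tour usage are illustrative assumptions rather than the thesis's actual data structures.

```python
class HumanAugmentedMap:
    """Toy topological layer on top of a metric robot map.

    Nodes are human-named places anchored at metric poses; edges record
    connectivity traversed during a guided tour.
    """
    def __init__(self):
        self.nodes = {}     # label -> {"pose": (x, y, theta), "kind": "region" or "location"}
        self.edges = set()  # undirected connectivity between labels
        self._last = None   # last labeled place, used to chain tour edges

    def add_place(self, label, pose, kind="location"):
        """Called when the user says e.g. 'this is the kitchen' during the tour."""
        self.nodes[label] = {"pose": pose, "kind": kind}
        if self._last is not None:
            self.edges.add(frozenset((self._last, label)))
        self._last = label

    def resolve(self, label):
        """Map a human place name back to a metric pose the navigator can use."""
        return self.nodes[label]["pose"]

# Guided-tour usage: the user walks the robot around and names places.
m = HumanAugmentedMap()
m.add_place("hallway", (0.0, 0.0, 0.0), kind="region")
m.add_place("kitchen", (3.2, 1.5, 1.57), kind="region")
m.add_place("coffee machine", (3.9, 2.1, 1.57))
print(m.resolve("kitchen"))
print(sorted(tuple(sorted(e)) for e in m.edges))
```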
106

Fysisk, känslomässig och social interaktion : En analys av upplevelserna av robotsälen Paro hos kognitivt funktionsnedsatta och på äldreboende / Tangible, affective and social interaction : Analysing experiences of Paro the robot seal in elderly care and among cognitively disabled

Nobelius, Jörgen January 2011 (has links)
This field study examined how elderly people and people with cognitive disabilities used and experienced a social companion robot. The following pages explore the question: what are the physical, social and affective qualities of the interaction? The aim was to observe how qualities of the interaction could activate different forms of behavior. The results show that motion, sound and the eyes together created communicative and emotional changes in the users, who felt joy and were willing to share the activity with others. To some extent the robot stimulated users to create their own imaginative experiences, but it often failed to engage a user or group for any length of time and was also considered too large and heavy to handle.
107

Determining User Requirements Of First-of-a-kind Interactive Systems: An Implementation Of Cognitive Analysis On Human Robot Interaction

Acikgoz Kopanoglu, Teksin 01 March 2011 (has links) (PDF)
Although user requirements are critical for the conformance of a system (or product) design with its users, they may be appraised late in the development process. Hence, resources and schedules may be planned within the limitations of system-oriented requirements, and critical feedback discovered late from users may not be reflected in the requirements or the design. The focus of this thesis is how to determine the user requirements of first-of-a-kind interactive systems early in the development process. First-of-a-kind interactive systems differ from others in that they have neither experienced users nor subject matter experts. Cognitive analysis techniques are investigated with the aim of discovering and integrating user requirements early in the development processes of first-of-a-kind systems. Hybrid Cognitive Task Analysis, one of these cognitive analysis techniques, is carried out to determine the user requirements of a system in the Human Robot Interaction area; while exemplifying the methodology, its competency and correspondence with the domain are observed.
108

Towards the human-centered design of everyday robots

Sung, Ja-Young 01 April 2011 (has links)
The recent advancement of robotic technology brings robots closer to assisting us in our everyday spaces, providing support for healthcare, cleaning, entertainment and other tasks. In this dissertation, I refer to these robots as everyday robots. Scholars argue that the key to successful human acceptance lies in the design of robots that have the ability to blend into everyday activities. A challenge remains: robots are an autonomous technology that triggers multi-faceted interactions, physical, intellectual, social and emotional, making their presence visible and even obtrusive. These challenges need more than technological advances to be resolved; more human-centered approaches are required in the design. However, to date, little is known about how to support the human-centered design of everyday robots. In this thesis, I address this gap by introducing an initial set of design guidelines for everyday robots. These guidelines are based on four empirical studies undertaken to identify how people live with robots in the home; the studies mine insights about which interaction attributes of everyday robots elicit positive or negative user responses. The guidelines were deployed in the development of one type of everyday robot, a senior-care robot called HomeMate. This deployment shows that the guidelines become useful early in the development process by helping designers and robot engineers focus on how the social and emotional values of end users influence the design of the required technical functions. Overall, this thesis addresses the question of how we can support the design of everyday robots so that they become more accepted by users. I respond to this question by proposing a set of design guidelines that account for lived experiences of robots in the home, which ultimately can improve the adoption and use of everyday robots.
109

An integrative framework of time-varying affective robotic behavior

Moshkina, Lilia V. 04 April 2011 (has links)
As robots become more and more prevalent in our everyday life, making sure that our interactions with them are natural and satisfactory is of paramount importance. Given the propensity of humans to treat machines as social actors, and the integral role affect plays in human life, providing robots with affective responses is a step towards making our interaction with them more intuitive. To the end of promoting more natural, satisfying and effective human-robot interaction and enhancing robotic behavior in general, an integrative framework of time-varying affective robotic behavior was designed and implemented on a humanoid robot. This psychologically inspired framework (TAME) encompasses four different yet interrelated affective phenomena: personality Traits, affective Attitudes, Moods and Emotions. Traits determine consistent patterns of behavior across situations and environments and are generally time-invariant; attitudes are long-lasting and reflect likes or dislikes towards particular objects, persons, or situations; moods are subtle and relatively short in duration, biasing behavior according to favorable or unfavorable conditions; and emotions provide a fast yet short-lived response to environmental contingencies. The software architecture incorporating the TAME framework was designed as a stand-alone process to promote platform independence and applicability to other domains. In this dissertation, the effectiveness of affective robotic behavior was explored and evaluated in a number of human-robot interaction studies with over 100 participants. In one of these studies, the impact of Negative Mood and the emotion of Fear was assessed in a mock-up search-and-rescue scenario, where the participants found the robot expressing affect more compelling, sincere, convincing and "conscious" than its non-affective counterpart. Another study showed that different robotic personalities are better suited for different tasks: an extraverted robot was found to be more welcoming and fun as a museum guide, where an engaging and gregarious demeanor was expected, whereas an introverted robot was rated as more appropriate for a problem-solving task requiring concentration. To conclude, multi-faceted robotic affect can have far-reaching practical benefits for human-robot interaction, from making people feel more welcome where gregariousness is expected, to providing unobtrusive partners for problem-solving tasks, to saving people's lives in dangerous situations.
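As a very loose illustration of how time-varying affect components could bias a single behavior parameter (not the dissertation's actual TAME model, update rules, or numbers), the toy sketch below lets a fast-decaying emotion, a slowly drifting mood, a fixed trait, and an attitude jointly modulate gesture amplitude.

```python
from dataclasses import dataclass

@dataclass
class AffectState:
    """Toy container for four TAME-style components; the fields, update rules,
    and constants are illustrative assumptions, not values from the work."""
    extraversion: float = 0.7          # trait in [0, 1], time-invariant
    attitude_toward_user: float = 0.2  # long-lasting like/dislike in [-1, 1]
    mood: float = 0.0                  # slowly varying bias in [-1, 1]
    fear: float = 0.0                  # fast-decaying emotion in [0, 1]

    def on_threat_detected(self, intensity):
        self.fear = min(1.0, self.fear + intensity)          # emotions spike quickly
        self.mood = max(-1.0, self.mood - 0.1 * intensity)   # and nudge mood downward

    def decay(self, dt):
        self.fear *= 0.5 ** (dt / 2.0)   # assumed emotion half-life of about 2 s
        self.mood *= 0.5 ** (dt / 60.0)  # mood drifts back to neutral far more slowly

def gesture_amplitude(affect, base=0.5):
    """Map the combined affect onto one expressive behavior parameter in [0, 1]."""
    amp = (base + 0.3 * affect.extraversion + 0.2 * affect.mood
           + 0.1 * affect.attitude_toward_user - 0.4 * affect.fear)  # fear subdues gestures
    return max(0.0, min(1.0, amp))

a = AffectState()
a.on_threat_detected(0.8)
print("amplitude right after threat:", round(gesture_amplitude(a), 2))
a.decay(dt=10.0)
print("amplitude after 10 s of decay:", round(gesture_amplitude(a), 2))
```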
110

Joint attention in human-robot interaction

Huang, Chien-Ming 07 July 2010 (has links)
Joint attention, a crucial component in interaction and an important milestone in human development, has drawn a lot of attention from the robotics community recently. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to manipulate another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator ensures that the responders have shifted their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability for a robot to ensure that joint attention is reached by the interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the developmental timeline of humans. Infants start with the skill of following a caregiver's gaze, and then they exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as they age and their social skills mature, initiating actions often come with an ensuring behavior: looking back and forth between the caregiver and the referred object to see whether the caregiver is paying attention to it. We conducted two experiments to investigate joint attention in human-robot interaction. The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural behaviors by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
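A minimal sketch of an "ensuring" loop, with hypothetical perception and action placeholders rather than a real robot API, might look like this: the robot initiates with a pointing cue, then alternates gaze between the partner and the referent, escalating to a verbal cue if the attention checks keep failing.

```python
import random

def partner_attending(target):
    """Stand-in for a gaze-tracking perception module (hypothetical)."""
    return random.random() < 0.4

def ensure_joint_attention(target, max_checks=3):
    """Ensure the partner attends to the referent after an initiating cue.

    Alternates gaze between the partner and the referent until the partner is
    judged to be attending, escalating the cue if checks keep failing.
    """
    print(f"point at {target}")                           # initiating cue
    for attempt in range(1, max_checks + 1):
        print("look at partner")                          # check where the partner looks
        if partner_attending(target):
            print(f"look back at {target}")               # shared focus established
            return True
        print(f"re-fixate {target}")                      # re-cue by looking at the referent
        if attempt == max_checks - 1:
            print(f"verbal cue: 'look at the {target}'")  # escalate before giving up
    return False

random.seed(0)
ensure_joint_attention("red block")
```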
