71

Pose Imitation Constraints For Kinematic Structures

Glebys T Gonzalez (14486934) 09 February 2023 (has links)
The use of robots has increased across many areas of society and human work, including medicine, transportation, education, space exploration, and the service industry. This growth has generated enthusiasm for developing more intelligent robots that can perform tasks as well as humans do. Such jobs still require human involvement as operators or teammates, since robots struggle with full automation in everyday settings. Soon, the role of humans will extend far beyond that of users or stakeholders to include those responsible for training such robots. A popular form of teaching is to let robots mimic human behavior. This method is intuitive and natural and does not require specialized knowledge of robotics. While there are other methods for robots to complete tasks effectively, collaborative tasks require mutual understanding and coordination that is best achieved by mimicking human motion. This mimicking problem has been tackled through skill imitation, which reproduces human-like motion during a task shown by a trainer. Skill imitation builds on faithfully replicating the human pose and requires two steps: first, an expert's demonstration is captured and pre-processed, and motion features are obtained; second, a learning algorithm optimizes for the task. The learning algorithms are often paired with traditional control systems to transfer the demonstration to the robot successfully. However, this methodology currently faces a generalization issue, as most solutions are formulated for specific robots or tasks. The lack of generalization is a problem, especially as robots in collaborative environments are replaced and upgraded far more frequently than in traditional manufacturing. As with humans, we expect robots to have more than one skill, and we expect the same skill to be performed by more than one type of robot. We address this issue by proposing a human motion imitation framework that can be computed efficiently and generalized to different kinematic structures (e.g., different robots).

This framework is developed by training an algorithm to augment collaborative demonstrations, facilitating generalization to unseen scenarios. We then create a model for pose imitation that converts human motion to a flexible constraint space. This space can be mapped directly to different kinematic structures by specifying a correspondence between the main human joints (i.e., shoulder, elbow, wrist) and robot joints. The model permits an unlimited number of robotic links between two assigned human joints, allowing different robots to mimic the demonstrated task and human pose. Finally, we incorporate the constraint model into a reward that informs a reinforcement learning algorithm during optimization. We tested the proposed methodology in different collaborative scenarios, assessing the task success rate, pose imitation accuracy, the occlusion that the robot produces in the environment, the number of collisions, and the learning efficiency of the algorithm.

The results show that the proposed framework produces effective collaboration across different robots and tasks.
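The constraint-space reward described above lends itself to a compact illustration. The following is a minimal Python sketch, not the thesis's implementation: the function name, the interpolation-based joint correspondence, and the weighting are all assumptions for illustration.

```python
import numpy as np

def pose_imitation_reward(robot_joints, human_anchors, task_reward,
                          pose_weight=0.5):
    """Blend a task-level reward with a pose-imitation penalty (sketch).

    robot_joints  -- (N, 3) positions of the robot links mapped onto the
                     human shoulder-elbow-wrist chain (N may vary per robot)
    human_anchors -- (3, 3) demonstrated shoulder, elbow, wrist positions
    task_reward   -- scalar reward from the collaborative task
    """
    robot_joints = np.asarray(robot_joints, dtype=float)
    human_anchors = np.asarray(human_anchors, dtype=float)

    # Resample the 3-joint human chain to the robot's link count, so any
    # number of links can sit between two assigned human joints.
    t_robot = np.linspace(0.0, 1.0, len(robot_joints))
    t_human = np.linspace(0.0, 1.0, len(human_anchors))
    targets = np.stack([np.interp(t_robot, t_human, human_anchors[:, d])
                        for d in range(3)], axis=1)

    # Mean deviation of the robot chain from the interpolated human pose.
    pose_error = np.linalg.norm(robot_joints - targets, axis=1).mean()
    return task_reward - pose_weight * pose_error
```

Because the human chain is resampled to the robot's link count, the same reward applies unchanged to robots with different numbers of links between the assigned joints, which is the generalization property the abstract emphasizes.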
72

Wie kommt die Robotik zum Sozialen? Epistemische Praktiken der Sozialrobotik. / How does robotics get to the social? Epistemic practices of social robotics.

Bischof, Andreas 01 March 2017 (has links) (PDF)
Numerous research projects devote substantial financial and human resources to getting robots out of the factory halls and into everyday settings such as hospitals, kindergartens and private homes. The engineers face a non-trivial challenge: they must translate the ambivalences and contingencies of everyday interaction into the discrete language of machines. How they meet this challenge, which patterns and solutions they draw on, and which implications for the use of social robots are laid down in the process, is the subject of this book. In search of an answer to what makes robots social, Andreas Bischof visited and ethnographically studied research laboratories and conferences in Europe and North America. Among the key results of this study are a typology of research goals in social robotics, an epistemic genealogy of the idea of the robot in everyday worlds, a reconstruction of how social-robotics development refers to 'real' everyday worlds, and an analysis of three genres of epistemic practices that engineers employ to make robots social.
73

Human-in-the-loop control for cooperative human-robot tasks

Chipalkatty, Rahul 29 March 2012 (has links)
Even with the advance of autonomous robotics and automation, many automated tasks still require human intervention or guidance to mediate uncertainties in the environment or to execute the complexities of a task that autonomous robots are not yet equipped to handle. As such, robot controllers are needed that utilize the strengths of both autonomous agents, adept at handling lower-level control tasks, and humans, superior at handling higher-level cognitive tasks. To address this need, we develop a control-theoretic framework that incorporates user commands such that user intention is preserved while an automated task is carried out by the controller. This approach is novel in that system-theoretic tools allow for analytic guarantees of feasibility and convergence to goal states, which naturally lead to varying levels of autonomy. We develop a model predictive controller that takes human input, infers human intent, and then applies a control that minimizes deviations from the intended human control while ensuring that the lower-level automated task is completed. This control framework is then evaluated in a human operator study involving a shared control task with human guidance of a mobile robot for navigation. These theoretical and experimental results lay the foundation for applying this method of human-robot cooperative control to actual tasks. Specifically, the control is applied to an Urban Search and Rescue robot task, where shared control of a quadruped rescue robot is needed to ensure static stability during human-guided leg placements on uneven terrain. The framework is also extended to a multiple-user, multiple-agent system in which human operators control multiple agents such that the agents maintain a formation while the operators manipulate the shape of the formation. User studies are also conducted to evaluate the control in multiple-operator scenarios.
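The core trade-off of such a controller, staying close to the inferred human command while still guaranteeing task progress, can be sketched in a few lines. This is a deliberately simplified single-integrator stand-in for the thesis's model predictive controller, with a one-step horizon; the names, dynamics, and parameter values are assumptions.

```python
import numpy as np

def shared_control_step(x, u_human, x_goal, dt=0.1, task_weight=2.0):
    """One step of a hypothetical shared controller: choose the input
    closest to the human's command while still driving the (single-
    integrator) state toward the task goal.

    Minimizes ||u - u_human||^2 + w * ||x + dt*u - x_goal||^2.
    """
    x, u_human, x_goal = map(np.asarray, (x, u_human, x_goal))
    w = task_weight
    # Setting the cost gradient to zero,
    #   (u - u_human) + w*dt*(x + dt*u - x_goal) = 0,
    # gives the closed-form blended control below.
    return (u_human + w * dt * (x_goal - x)) / (1.0 + w * dt**2)

# Example: the human pushes away from the goal; the controller compromises.
u = shared_control_step(x=[0.0], u_human=[-1.0], x_goal=[5.0])
```

As task_weight approaches zero the controller defers entirely to the human; increasing it shifts authority toward the automated task, which is one simple way to realize the "varying levels of autonomy" the abstract mentions.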
74

Haptic interaction between naive participants and mobile manipulators in the context of healthcare

Chen, Tiffany L. 22 May 2014 (has links)
Human-scale mobile robots that manipulate objects (mobile manipulators) have the potential to perform a variety of useful roles in healthcare. Many promising roles for robots require physical contact with patients and caregivers, which is fraught with both psychological and physical implications. In this thesis, we used a human factors approach to evaluate system performance and participant responses when potential end users performed a healthcare task involving physical contact with a robot. We performed four human-robot interaction studies with 100 people who were not experts in robotics (naive participants). We show that physical contact between naive participants and human-scale mobile manipulators can be acceptable and effective in a variety of healthcare contexts. We investigated two forms of touch-based (haptic) interaction relevant to healthcare. First, we studied how participants responded to physical contact initiated by an autonomous robotic nurse. On average, people responded favorably to robot-initiated touch when the robot indicated that it was a necessary part of a healthcare task. However, their responses strongly depended on what they thought the robot's intentions were, which suggests that this will be an important consideration for future healthcare robots. Second, we investigated the coordination of whole-body motion between human-scale robots and people through forces applied to the robot's hands and arms. Nurses found this haptic interaction intuitive and preferred it over a standard gamepad interface. They also navigated the robot through a cluttered healthcare environment in less time, with fewer collisions, and with less cognitive load via haptic interaction. Through a study with expert dancers, we demonstrated the feasibility of robots as dance-based exercise partners. The experts rated a robot that used only haptic interaction to be a good follower according to subjective measures of dance quality. We also determined that healthy older adults were accepting of using a robot for partner dance-based exercise. On average, they found the robot easy and enjoyable to use and felt that it performed a partnered stepping task well. The findings of this work have several implications for the design of robots in healthcare. We found that the perceived intent of robot-initiated touch significantly influenced people's responses. Thus, we determined that autonomous robots that initiate touch with patients can be acceptable in some contexts. This result highlights the importance of considering users' psychological responses when designing physical human-robot interactions, in addition to the mechanics of performing tasks. We found that naive users across three user groups could quickly learn to use physical interaction effectively to lead a robot during navigation, positioning, and partnered stepping tasks. These consistent results underscore the value of physical interaction for enabling users of varying backgrounds to lead a robot during whole-body motion coordination across different healthcare contexts.
75

Étude des conditions d'acceptabilité de la collaboration homme-robot en utilisant la réalité virtuelle / Assessing the acceptability of human-robot collaboration using virtual reality

Weistroffer, Vincent 11 December 2014 (has links)
Whether in industry or everyday life, robots are becoming more and more present in our environment and are nowadays able to interact with humans. In industrial settings, robots now assist operators on assembly lines with strenuous and dangerous tasks. Robots and operators therefore need to share the same physical space (copresence) and to carry out common tasks (collaboration). While the safety of humans working near robots has to be guaranteed at all times, it is also necessary to determine whether such collaborative work is accepted by the operators, in terms of usability and utility.
The first research question of the thesis is which criteria play an important role in the acceptability of human-robot collaboration from the operators' point of view. Different factors can influence this acceptability: robot appearance, robot movements, safety distance, or the modes of interaction with the robot. In order to study as many factors as possible, we use virtual reality to perform user studies in virtual environments, with questionnaires to gather the operators' subjective impressions and physiological measures to estimate their affective states (stress, effort).
The second research question is whether a methodology based on virtual reality is relevant for this evaluation: can the results of studies in virtual environments be reproduced in equivalent physical situations? To answer these questions, three use cases were implemented and four studies were performed. Two of the studies were conducted both in a physical situation and in its virtual-reality counterpart in order to evaluate how well the results of the virtual situation match the real one.
76

Reading with Robots: A Platform to Promote Cognitive Exercise through Identification and Discussion of Creative Metaphor in Books

Parde, Natalie 08 1900 (has links)
Maintaining cognitive health is often a pressing concern for aging adults, and given the world's shifting age demographics, it is impractical to assume that older adults will be able to rely on individualized human support for doing so. Recently, interest has turned toward technology as an alternative. Companion robots offer an attractive vehicle for facilitating cognitive exercise, but the language technologies guiding their interactions are still nascent; in elder-focused human-robot systems proposed to date, interactions have been limited to motion or buttons and canned speech. The incapacity of these systems to autonomously participate in conversational discourse limits their ability to engage users at a cognitively meaningful level. I addressed this limitation by developing a platform for human-robot book discussions, designed to promote cognitive exercise by encouraging users to consider the authors' underlying intentions in employing creative metaphors. The choice of book discussions as the backdrop for these conversations has an empirical basis in neuro- and social science research that has found that reading often, even in late adulthood, has been correlated with a decreased likelihood to exhibit symptoms of cognitive decline. The more targeted focus on novel metaphors within those conversations stems from prior work showing that processing novel metaphors is a cognitively challenging task, for young adults and even more so in older adults with and without dementia. A central contribution arising from the work was the creation of the first computational method for modelling metaphor novelty in word pairs. I show that the method outperforms baseline strategies as well as a standard metaphor detection approach, and additionally discover that incorporating a sentence-based classifier as a preliminary filtering step when applying the model to new books results in a better final set of scored word pairs. I trained and evaluated my methods using new, large corpora from two sources, and release those corpora to the research community. In developing the corpora, an additional contribution was the discovery that training a supervised regression model to automatically aggregate the crowdsourced annotations outperformed existing label aggregation strategies. Finally, I show that automatically-generated questions adhering to the Questioning the Author strategy are comparable to human-generated questions in terms of naturalness, sensibility, and question depth; the automatically-generated questions score slightly higher than human-generated questions in terms of clarity. I close by presenting findings from a usability evaluation in which users engaged in thirty-minute book discussions with a robot using the platform, showing that users find the platform to be likeable and engaging.
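One concrete contribution above, learning to aggregate crowdsourced ratings with a supervised regressor rather than simple averaging, can be sketched briefly. This is an illustrative stand-in, not the thesis's model: the feature set, the choice of random forest, and all names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_label_aggregator(annotations, gold_scores):
    """Regression-based label aggregation (sketch): instead of taking
    the raw mean of crowdsourced novelty ratings, learn to map per-item
    rating statistics to an expert ("gold") score.

    annotations -- (n_items, n_annotators) matrix of ratings
    gold_scores -- (n_items,) expert scores for a training subset
    """
    annotations = np.asarray(annotations, dtype=float)
    feats = np.column_stack([
        annotations.mean(axis=1),   # central tendency
        annotations.std(axis=1),    # annotator disagreement
        annotations.min(axis=1),
        annotations.max(axis=1),
    ])
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(feats, gold_scores)
    return model  # model.predict(new_feats) aggregates unseen items
```

The intuition is that the regressor can learn, for example, to discount high-disagreement items, which a plain average cannot do.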
77

Blinking in human communicative behaviour and its reproduction in artificial agents

Ford, Christopher Colin January 2014 (has links)
A significant year-on-year rise in the creation and sales of personal and domestic robotic systems, and the development of online embodied communicative agents (ECAs), has been accompanied by an increase in end-users from the public domain interacting with these systems. A number of these robotic/ECA systems are defined as social: they are physically designed to resemble the bodily structure of a human and behaviourally designed to exist within human social surroundings. Their behavioural design is especially important with respect to communication, as it is commonly stated that for any social robotic/ECA system to be truly useful within its role, it will need to communicate effectively with its human users. Currently, however, the act of a human user instructing a social robotic/ECA system to perform a task highlights many areas of contention in human communication understanding. Commonly, social robotic/ECA systems are embedded with either non-human-like communication interfaces or deficient imitative human communication interfaces, neither of which reaches the level of communicative interaction expected by human users. The resulting communication difficulties create negative associations with the social robotic/ECA system in its users, and lead to a strong requirement for more effective imitative human communication behaviours within these systems.

This thesis presents findings from our research into human non-verbal facial behaviour in communication. The objective of the work was to improve communication grounding between social robotic/ECA systems and their human users through the conceptual design of a computational system of human non-verbal facial behaviour (which in human-human communicative behaviour is shown to carry in the range of 55% of the intended semantic meaning of a transferred message), the development of a highly accurate computational model of human blink behaviour, and a computational model of physiological saccadic eye movement in human-human communication, enriching the human-like properties of the facial non-verbal communicative feedback expressed by the social robotic/ECA system. An enhanced level of interaction would likely be achieved, leading to increased empathic response from the user and an improved chance of a satisfactory communicative conclusion to a user's task-requirement instructions.

The initial focus of the work was the capture, transcription and analysis of common human non-verbal facial behavioural traits within human-human communication, linked to the expression of the mental communicative states of understanding, uncertainty, misunderstanding and thought. Facial non-verbal behaviour data were collected and transcribed from twelve participants (six female) through a dialogue-based communicative interaction. A further focus was the analysis of blink co-occurrence with other traits of human-human communicative non-verbal facial behaviour, and the capture of saccadic eye movement at common proxemic distances. From these analyses, the computational models of human blink behaviour and saccadic eye movement while listening and speaking in human-human communication were designed and then implemented within the LightHead social robotic system. Studies of naïve users' perception of the imitative probabilistic blink model running on the LightHead robotic system are presented and the results discussed.
The thesis concludes on the impact of the work along with suggestions for further studies towards the improvement of the important task of achieving seamless interactive communication between social robotic/ECA systems and their human users.
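A probabilistic blink generator of the kind described can be illustrated in a few lines. The sketch below is only in the spirit of the thesis: the log-normal form and the speaking/listening parameters are assumptions chosen to give plausible blink rates, not the published model.

```python
import random

def blink_schedule(duration_s, speaking=True, seed=None):
    """Generate blink onset times (seconds) over an interaction window.

    Inter-blink intervals are drawn from a log-normal distribution whose
    parameters differ between speaking and listening, reflecting the
    finding that blink behaviour varies with conversational role.
    """
    rng = random.Random(seed)
    # Illustrative parameters only: median interval ~3 s while speaking
    # (~20 blinks/min) and ~4.5 s while listening (~13 blinks/min).
    mu, sigma = (1.1, 0.6) if speaking else (1.5, 0.6)
    t, blinks = 0.0, []
    while True:
        t += rng.lognormvariate(mu, sigma)
        if t >= duration_s:
            return blinks
        blinks.append(round(t, 2))

print(blink_schedule(30.0, speaking=True, seed=42))
```

A real system, as the abstract notes, would additionally condition blinks on co-occurring non-verbal cues rather than sampling them independently.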
78

A study of non-linguistic utterances for social human-robot interaction

Read, Robin January 2014 (has links)
The world of animation has painted an inspiring image of what the robots of the future could be. Taking the robots R2D2 and C3PO from the Star Wars films as representative examples, these robots are portrayed as being more than just machines; rather, they are presented as intelligent and capable social peers, exhibiting many of the traits that people have. These robots have the ability to interact with people, understand us, and even relate to us in very personal ways through a wide repertoire of social cues. As robotic technologies continue to make their way into society at large, there is a growing trend toward making social robots. The field of Human-Robot Interaction concerns itself with studying, developing and realising these socially capable machines, equipping them with a very rich variety of capabilities that allow them to interact with people in natural and intuitive ways, ranging from the use of natural language, body language and facial gestures, to more unique ways such as expression through colours and abstract sounds. This thesis studies the use of abstract, expressive sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs) and are a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them. This work presents a series of experiments aimed at understanding how NLUs can be utilised by a social robot in order to convey affective meaning to people both young and old, and what factors impact the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people's affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is uncovered that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues.
79

Robots learning actions and goals from everyday people

Akgun, Baris 07 January 2016 (has links)
Robots are destined to move beyond the caged factory floors towards domains where they will interact closely with humans. They will encounter highly varied environments, scenarios and user demands. As a result, programming robots after deployment will be an important requirement. To address this challenge, the field of Learning from Demonstration (LfD) emerged with the vision of programming robots through demonstrations of the desired behavior instead of explicit programming. The field of LfD within robotics has been around for more than 30 years and is still actively researched. However, very little research has been done on the implications of having a non-robotics expert as a teacher. This thesis aims to bridge this gap by developing learning from demonstration algorithms and interaction paradigms that allow non-expert people to teach robots new skills. The first step of the thesis was to evaluate how non-expert teachers provide demonstrations to robots. Keyframe demonstrations are introduced to the field of LfD to help people teach skills to robots and are compared with traditional trajectory demonstrations. The utility of keyframes is validated by a series of experiments with more than 80 participants. Based on the experiments, a hybrid of trajectory and keyframe demonstrations is proposed to take advantage of both, and a method was developed to learn from trajectories, keyframes and hybrid demonstrations in a unified way. A key insight from these user experiments was that teachers are goal oriented: they concentrated on achieving the goal of the demonstrated skills rather than on providing good-quality demonstrations. Based on this observation, this thesis introduces a method that can learn actions and goals from the same set of demonstrations. The action models are used to execute the skill and the goal models to monitor this execution. A user study with eight participants and two skills showed that successful goal models can be learned from non-expert teacher data even if the resulting action models are not as successful. Following these results, this thesis further develops a self-improvement algorithm that uses the goal-monitoring output to improve the action models without further user input. This approach is validated with an expert user and two skills. Finally, this thesis builds an interactive LfD system that incorporates both goal learning and self-improvement and evaluates it with 12 naive users and three skills. The results suggest that teacher feedback during experiments increases skill execution and monitoring success, and that non-expert data can be used as a seed for self-improvement to fix unsuccessful action models.
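The keyframe idea contrasted with trajectory demonstrations above is easy to illustrate. The following is a minimal sketch under assumed names, with plain linear interpolation for playback; the thesis's unified learner for trajectories, keyframes and hybrids is considerably richer.

```python
import numpy as np

def keyframes_to_trajectory(keyframes, times, rate_hz=50):
    """Sketch of keyframe-based playback: the teacher supplies a sparse
    set of poses ("keyframes") and the robot interpolates between them,
    rather than replaying an entire demonstrated trajectory.

    keyframes -- (K, D) joint configurations chosen by the teacher
    times     -- (K,) monotonically increasing timestamps in seconds
    """
    keyframes = np.asarray(keyframes, dtype=float)
    times = np.asarray(times, dtype=float)
    t_dense = np.arange(times[0], times[-1], 1.0 / rate_hz)
    # Linear interpolation per joint; a real system would use splines
    # and the robot's own controller for smooth execution.
    return np.stack([np.interp(t_dense, times, keyframes[:, j])
                     for j in range(keyframes.shape[1])], axis=1)

# Example: three keyframes for a 2-DOF arm, executed over 4 seconds.
traj = keyframes_to_trajectory([[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]],
                               times=[0.0, 2.0, 4.0])
```

One practical appeal noted in the abstract is that keyframes are forgiving of noisy teacher motion: only the poses at the marked instants matter, not the quality of the motion between them.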
80

Facilitating play between children with autism and an autonomous robot

Francois, Dorothee C. M. January 2009 (has links)
This thesis is part of the Aurora project, an ongoing long-term project investigating the potential use of robots to help children with autism overcome some of their impairments in social interaction, communication and imagination. Autism is a spectrum disorder and children with autism have different abilities and needs. Related research has shown that robots can play the role of a mediator for social interaction in the context of autism. Robots can enable simple interactions, by initially providing a relatively predictable environment for play. Progressively, the complexity of the interaction can be increased. The purpose of this thesis is to facilitate play between children with autism and an autonomous robot. Children with autism have a potential for play but often encounter obstacles to actualize this potential. Through play, children can develop multidisciplinary skills, involving social interaction, communication and imagination. Besides, play is a medium for self-expression. The purpose here is to enable children with autism to experience a large range of play situations, ranging from dyadic play with progressively better balanced interaction styles, to situations of triadic play with both the robot and the experimenter. These triadic play situations could also involve symbolic or pretend play. This PhD work produced the following results:

• A new methodological approach of how to design, conduct and analyse robot-assisted play was developed and evaluated. This approach draws inspiration from non-directive play therapy where the child is the main leader for play and the experimenter participates in the play sessions. I introduced a regulation process which enables the experimenter to intervene under precise conditions in order to: i) prevent the child from entering or staying in repetitive behaviours, ii) provide bootstrapping that helps the child reach a situation of play she is about to enter and iii) ask the child questions dealing with affect or reasoning about the robot. This method has been tested in a long-term study with six children with autism. Video recordings of the play sessions were analysed in detail according to three dimensions, namely Play, Reasoning and Affect. Results have shown the ability of this approach to meet each child's specific needs and abilities. Future work may develop this work towards a novel approach in autism therapy.

• A novel and generic computational method for the automatic recognition of human-robot interaction styles (specifically gentleness and frequency of touch interaction) in real time was developed and tested experimentally (see the sketch after this list). This method, the Cascaded Information Bottleneck Method, is based on an information-theoretic approach. It relies on the principle that the relevant information can be progressively extracted from a time series with a cascade of successive bottlenecks sharing the same cardinality of bottleneck states but trained successively. This method has been tested with data that had been generated with a physical robot a) during human-robot interactions in laboratory conditions and b) during child-robot interactions in school. The method shows a sound recognition of both short-term and mid-term time-scale events. The recognition process only involves a very short delay. The Cascaded Information Bottleneck is a generic method that can potentially be applied to various applications of socially interactive robots.

• A proof-of-concept system of an adaptive robot was demonstrated that is responsive to different styles of interaction in human-robot interaction. Its impact was evaluated in a short-term study with seven children with autism. The recognition process relies on the Cascaded Information Bottleneck Method. The robot rewards well-balanced interaction styles. The study shows the potential of the adaptive robot i) to encourage children to engage more in the interaction and ii) to positively influence the children's play styles towards better balanced interaction styles. It is hoped that this work is a step forward towards socially adaptive robots as well as robot-assisted play for children with autism.
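The Cascaded Information Bottleneck itself is too involved for a short sketch, but the two interaction-style dimensions it recognizes can be illustrated with a naive feature computation over a force-sensor time series. Everything here, names, threshold, sampling rate, is an assumption for illustration; this is emphatically not the thesis's method, only the quantities it targets.

```python
import numpy as np

def touch_style_features(force, threshold=0.2, window_s=2.0, rate_hz=50):
    """Per-window (frequency, gentleness) features of touch interaction.

    force -- 1-D force-sensor readings sampled at rate_hz
    Returns a list of (touches per second, mean contact force) tuples,
    one per non-overlapping window.
    """
    force = np.asarray(force, dtype=float)
    n = int(window_s * rate_hz)
    feats = []
    for start in range(0, len(force) - n + 1, n):
        w = force[start:start + n]
        contact = w > threshold
        # Count a "touch" as a rising edge of the contact signal.
        touches = np.count_nonzero(np.diff(contact.astype(int)) == 1)
        mean_force = w[contact].mean() if contact.any() else 0.0
        feats.append((touches / window_s, mean_force))
    return feats
```

Where this sketch thresholds and averages, the cascaded bottleneck instead learns compressed state sequences from the raw series, which is what enables recognition across both short-term and mid-term time scales.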
