  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

The Effects Of Diagnostic Aiding On Situation Awareness Under Robot Unreliability

Schuster, David 01 January 2013 (has links)
In highly autonomous robotic systems, human operators are able to attend to their own, separate tasks, but robots still need occasional human intervention. In this scenario, it may be difficult for human operators to determine the status of the system and environment when called upon to aid the robot. The resulting lack of situation awareness (SA) is a problem common to other automated systems, and it can lead to poor performance and compromised safety. Existing research on this problem suggested that reliable automation of information processing, called diagnostic aiding, leads to better operator SA. The effects of unreliable diagnostic aiding, however, were not well understood. These effects are likely to depend on the ability of the operator to perform the task unaided. That is, under conditions in which the operator can reconcile their own sensing with that of the robot, the influence of unreliable diagnostic aiding may be more pronounced. When the robot is the only source of information for a task, these effects may be weaker or may reverse direction. The purpose of the current experiment was to determine if SA is differentially affected by unreliability at different levels of unaided human performance and at different stages of diagnostic aiding. This was accomplished by experimentally manipulating the stage of diagnostic aiding, robot reliability, and the ability of the operator to build SA unaided. Results indicated that while reliable diagnostic aiding is generally useful, unreliable diagnostic aiding has effects that depend on the amount of information available to operators in the environment. This research improves understanding of how robots can support operator SA and can guide the development of future robots so that humans are most likely to use them effectively.
172

Influence Of Task-role Mental Models On Human Interpretation Of Robot Motion Behavior

Ososky, Scott 01 January 2013 (has links)
The transition in robotics from tools to teammates has begun. However, the benefit autonomous robots provide will be diminished if human teammates misinterpret robot behaviors. Applying mental model theory as the organizing framework for human understanding of robots, the current empirical study examined the influence of task-role mental models of robots on the interpretation of robot motion behaviors, and the resulting impact on subjective ratings of robots. Observers (N = 120) were exposed to robot behaviors that were either congruent or incongruent with their task-role mental model, by experimental manipulation of preparatory robot task-role information to influence mental models (i.e., security guard, groundskeeper, or no information), the robot's actual task-role behaviors (i.e., security guard or groundskeeper), and the order in which these robot behaviors were presented. The results of the research supported the hypothesis that observers with congruent mental models were significantly more accurate in interpreting the motion behaviors of the robot than observers without a specific mental model. Additionally, an incongruent mental model, under certain circumstances, significantly hindered an observer's interpretation accuracy, resulting in subjective sureness of inaccurate interpretations. The strength of the effects that mental models had on the interpretation and assessment of robot behaviors was thought to have been moderated by the ease with which a particular mental model could reasonably explain the robot's behavior, termed mental model applicability. Finally, positive associations were found between differences in observers' interpretation accuracy and differences in subjective ratings of robot intelligence, safety, and trustworthiness. The current research offers implications for the relationships between mental model components, as well as implications for designing robot behaviors to appear more transparent, or opaque, to humans.
173

User-centred design of an outreach robot

He, Ying January 2023 (has links)
The goal of this project is to involve adolescents in the design of their own social robots, and to explore their concerns and opinions about social robots during the design process. To support their design efforts, I have developed a digital toolkit that includes features for customizing the appearance, personality, and reactive behaviors of the robots. In addition, this paper presents some of the adolescents’ views on gender and robots that were elicited during the project. The insights and feedback from the participants can inform the design of future outreach robots and improve their social interactions with adolescents.
174

A Behavioral Approach to Human-Robot Communication

Ou, Shichao 01 February 2010 (has links)
Robots are increasingly capable of co-existing with human beings in the places where we live and work. I believe, however, that for robots to collaborate with and assist human beings in their daily lives, new methods are required for enhancing human-robot communication. In this dissertation, I focus on how a robot can acquire and refine expressive and receptive communication skills with human beings. I hypothesize that communication has its roots in motor behavior and present an approach that is unique in the following aspects: (1) representations of humans and the skills for interacting with them are learned in the same way as the robot learns to interact with other “objects,” (2) expressive behavior naturally emerges as the result of the robot discovering new utility in existing manual behavior in a social context, and (3) symmetry in communicative behavior can be exploited to bootstrap the learning of receptive behavior. Experiments were designed to evaluate the approach: (1) as a computational framework for learning increasingly comprehensive models and behavior for communicating with human beings, and (2) from a human-robot interaction perspective that can adapt to a variety of human behavior. Results from these studies illustrate that the robot successfully acquired a variety of expressive pointing gestures using multiple limbs and eye gaze, and the perceptual skills with which to recognize and respond to similar gestures from humans. Due to variations in human reactions over the training subjects, the robot developed a preference for certain gestures over others. These results support the experimental hypotheses and offer insights for extensions of the computational framework and experimental designs for future studies.
175

Enabling Successful Human-Robot Interaction Through Human-Human Co-Manipulation Analysis, Soft Robot Modeling, and Reliable Model Evolutionary Gain-Based Predictive Control (MEGa-PC)

Jensen, Spencer W. 11 July 2022 (has links)
Soft robots are inherently safer than traditional robots due to their compliance and high power density, resulting in lower accidental impact forces. Thus, they are a natural option for human-robot interaction. This thesis specifically looked at human-robot co-manipulation, which is defined as a human and a robot working together to move an object too large or awkward to be safely maneuvered by a single agent. To better understand how humans communicate while co-manipulating an object, this work examined the haptic interaction of human-human dyadic co-manipulation trials and studied some of the trends found in that interaction. These trends point to ways robots can effectively work with human partners in the future. Before successful human-robot co-manipulation with large-scale soft robots can be achieved, low-level joint angle control is needed. Low-level model predictive control of soft robot joints requires a sufficiently accurate model of the system. This thesis introduces a recursive Newton-Euler method for deriving the dynamics that is sufficiently accurate and accounts for flexible joints in an intuitive way. This model has been shown to be accurate to a median absolute error of 3.15 degrees for a three-link, three-joint, six-degree-of-freedom soft robot arm. Once a sufficiently accurate model was developed, a gain-based evolutionary model predictive control (MPC) technique was formulated based on a previous evolutionary MPC technique. This new method is referred to as model evolutionary gain-based predictive control, or MEGa-PC. This control law is compared to nonlinear evolutionary model predictive control (NEMPC). The new technique allows intentionally decreasing the control frequency to 10 Hz while maintaining control of the system, which is shown to help MPC solve more difficult problems by extending the control horizon. This new controller is also demonstrated to work well on a three-joint, three-link soft robot arm.
Although complete physical human-robot co-manipulation is outside the scope of this thesis, this thesis covers three main building blocks for physical human and soft robot co-manipulation: human-human haptic communication, soft robot modeling, and model evolutionary gain-based predictive control.
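The gain-based evolutionary predictive control idea above can be illustrated with a deliberately simplified, hypothetical sketch: a random-mutation search over PD feedback gains, where each candidate gain set is scored by rolling out a toy one-joint model over a prediction horizon. This is not the thesis's MEGa-PC implementation or its soft-robot dynamics; the model, cost function, and all parameters below are illustrative assumptions.

```python
import math
import random

def simulate(kp, kd, theta0, target, horizon=100, dt=0.01):
    """Roll out a toy 1-DOF arm (unit inertia, gravity torque) under PD
    feedback and return the accumulated squared tracking error."""
    theta, omega, cost = theta0, 0.0, 0.0
    for _ in range(horizon):
        torque = kp * (target - theta) - kd * omega
        alpha = torque - 9.81 * math.sin(theta)  # toy dynamics, not the soft-robot model
        omega += alpha * dt
        theta += omega * dt
        cost += (target - theta) ** 2 * dt
    return cost

def evolve_gains(theta0, target, generations=30, pop=20, seed=0):
    """Evolutionary search over (kp, kd): keep the best candidate so far
    and mutate it, standing in for the evolutionary MPC inner loop."""
    rng = random.Random(seed)
    best = (rng.uniform(0, 50), rng.uniform(0, 10))
    best_cost = simulate(*best, theta0, target)
    for _ in range(generations):
        for _ in range(pop):
            cand = (abs(best[0] + rng.gauss(0, 5)),
                    abs(best[1] + rng.gauss(0, 1)))
            c = simulate(*cand, theta0, target)
            if c < best_cost:
                best, best_cost = cand, c
    return best, best_cost
```

For a step command from 0 to 1 rad, the evolved gains should accumulate less tracking cost over the horizon than leaving the joint uncontrolled (zero gains), which is the sense in which the evolutionary search "solves" the predictive control problem at each step.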
176

Moderating Influence as a Design Principle for Human-Swarm Interaction

Ashcraft, C Chace 01 April 2019 (has links)
Robot swarms have recently become of interest in both industry and academia for their potential to perform various difficult or dangerous tasks efficiently. As real robot swarms become more of a possibility, many desire swarms to be controlled or directed by a human, which raises questions regarding how that should be done. Part of the challenge of human-swarm interaction is the difficulty of understanding swarm state and of knowing how to drive the swarm to produce emergent behaviors. Poor human input can inhibit desirable swarm behaviors if it has sufficient influence over swarm agents, degrading the swarm's overall performance. Thus, with too little influence, human input is useless, but with too much, it can be destructive. We suggest that there is some middle level, or interval, of human influence that allows the swarm to take advantage of useful human input while minimizing the effect of destructive input. Further, we propose that human-swarm interaction schemes can be designed to maintain an appropriate level of human influence over the swarm and to maintain or improve swarm performance in the presence of both useful and destructive human input. We test this theory by implementing a piece of software to dynamically moderate influence and then testing it with a simulated honey bee colony performing nest site selection, simulated human input, and actual human input via a user study. The results suggest that moderating influence, as proposed, is important for maintaining high performance in the presence of both useful and destructive human input. However, while our software seems to successfully moderate influence with simulated human input, it fails to do so with actual human input.
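The moderation idea can be sketched with a hypothetical scalar consensus model (not the study's honey bee nest-site simulation): each swarm update blends the local neighbor consensus with a human command, and the human's influence weight is nudged up when the command agrees with the swarm's own estimate and down when it persistently disagrees. The thresholds, step size, and bounds below are illustrative assumptions.

```python
def moderate_weight(weight, human_cmd, swarm_estimate,
                    agree_tol=0.2, step=0.05, lo=0.05, hi=0.6):
    """Adapt the human-influence weight: increase it when the human
    command agrees with the swarm's estimate, decrease it otherwise,
    keeping it inside an interval [lo, hi] of non-zero influence."""
    if abs(human_cmd - swarm_estimate) <= agree_tol:
        return min(hi, weight + step)
    return max(lo, weight - step)

def blended_update(neighbor_mean, human_cmd, weight):
    """One agent update: blend local consensus with the moderated human input."""
    return (1 - weight) * neighbor_mean + weight * human_cmd
```

Under this scheme, persistently destructive input (far from the swarm's estimate) drives the weight to the floor, limiting the damage, while useful input is allowed up to the ceiling, matching the "middle interval of influence" intuition.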
177

Socially aware robot navigation

Antonucci, Alessandro 03 November 2022 (has links)
A growing number of applications involving autonomous mobile robots will require navigation across environments in which spaces are shared with humans. In those situations, the robot's actions are socially acceptable if they reflect the behaviours that humans would generate in similar conditions. Therefore, the robot must perceive people in the environment and react correctly based on their actions and their relevance to its mission. To advance human-robot interaction, the proposed research focuses on efficient robot motion algorithms, covering all the tasks needed in the whole process, such as obstacle detection, human motion tracking and prediction, and socially aware navigation. The final framework presented in this thesis is a robust and efficient solution enabling the robot to correctly understand human intentions and consequently perform safe, legible, and socially compliant actions. The thesis retraces in its structure all the different steps of the framework through the presentation of the algorithms and models developed, and the experimental evaluations carried out both in simulation and on real robotic platforms, showing the performance obtained in real-time in complex scenarios where humans are present and play a prominent role in the robot's decisions. The proposed implementations are all based on insightful combinations of traditional model-based techniques and machine learning algorithms, fused to solve human-aware navigation effectively. The synergy of the two methodologies gives greater flexibility and generalization than the navigation approaches proposed so far, while maintaining an accuracy and reliability that learning methods do not always display.
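One ingredient of such a pipeline, short-horizon human motion prediction, can be sketched in its simplest baseline form: constant-velocity extrapolation from the last two tracked positions. This is only a hypothetical stand-in for the learned predictors the thesis combines with model-based techniques, but it shows the interface a planner would consume: a list of predicted future positions.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def predict_constant_velocity(p_prev: Point, p_curr: Point,
                              dt: float, horizon_steps: int) -> List[Point]:
    """Extrapolate future positions assuming the tracked person keeps the
    velocity estimated from the two most recent observations."""
    vx = (p_curr[0] - p_prev[0]) / dt
    vy = (p_curr[1] - p_prev[1]) / dt
    return [(p_curr[0] + vx * dt * k, p_curr[1] + vy * dt * k)
            for k in range(1, horizon_steps + 1)]
```

A socially aware planner could then treat each predicted point as a time-stamped region to keep clear of, with the prediction refreshed at every tracking update.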
178

Enhancing human-robot interaction using mixed reality

Molina Morillas, Santiago January 2023 (has links)
Industry 4.0 is a new phase of industrial growth ushered in by the rapid development of digital technologies such as the Internet of Things (IoT), artificial intelligence (AI), and robotics. Collaborative robotic products have appeared in this changing environment, enabling robots to work with people in open workspaces. The paradigm shift away from autonomous robotics toward collaborative human-robot interaction (HRI) has made it necessary to look at novel ways to improve output, effectiveness, and security. Many benefits, including greater autonomy and flexibility, have been made possible by the introduction of Automated Guided Vehicles (AGVs) and later Autonomous Mobile Robots (AMRs) for material handling. However, this incorporation of robots into shared workspaces also brings up safety issues that must be taken into account. This thesis aims to address potential threats arising from the increasing automation of shopfloors and workplaces shared between AMRs and human operators by exploring the capabilities of Mixed Reality (MR) technologies. By harnessing MR's capabilities, the aim is to mitigate safety concerns and optimize the effectiveness of collaborative environments. To achieve this, the research is structured around the following sub-objectives: the development of a communication network enabling interaction among all devices in the shared workspace, and the creation of an MR user interface promoting accessibility for human operators. A comprehensive literature review was conducted to analyse existing proposals aimed at improving HRI through various techniques and approaches. The objective was to leverage MR technologies to enhance collaboration and address safety concerns, thereby ensuring the smooth integration of AMRs into shared workspaces.
While the literature review revealed limited research utilizing MR for data visualization in this specific domain, the goal of this thesis was to go beyond existing solutions by developing a comprehensive approach that prioritizes safety and facilitates operator adaptation. The research findings highlight the superiority of MR in displaying critical information regarding robot intentions and in identifying safe zones with reduced AMR activity. The utilization of HoloLens 2 devices, known for their ergonomic design, ensures operator comfort during extended use while enhancing the accuracy of tracking positions and intentions in highly automated environments. The presented information is designed to be concise, customizable, and easily comprehensible, preventing information overload for operators. The implementation of MR technologies within shared workspaces necessitates ethical considerations, including transparent data collection and user consent. Building trust is essential to establish MR as a reliable tool that enhances operator working conditions and safety. Importantly, the integration of MR technologies does not pose a threat of job displacement but rather facilitates the smooth adaptation of new operators to collaborative environments. The implemented features augment existing safety protocols without compromising efficacy, resulting in an overall improvement in safety within the collaborative workspace. In conclusion, this research showcases the effectiveness of MR technologies in bolstering HRI, addressing safety concerns, and enhancing operator working conditions within collaborative shopfloor environments. Despite encountering limitations in terms of time, complexity, and available information, the developed solution demonstrates the potential for further improvement. The chosen methodology and philosophical paradigm have successfully attained the research objectives, and crucial ethical considerations have been addressed.
Ultimately, this thesis proposes and provides a comprehensive explanation for potential future implementations, aiming to expand the actual capabilities of the solution.
179

The Effects of a Humanoid Robot's Non-lexical Vocalization on Emotion Recognition and Robot Perception

Liu, Xiaozhen 30 June 2023 (has links)
As robots have become more pervasive in our everyday life, the social aspects of robots have attracted researchers' attention. Because emotions play a key role in social interactions, research has been conducted on conveying emotions via speech, whereas little research has focused on the effects of non-speech sounds on users' robot perception. We conducted a within-subjects exploratory study with 40 young adults to investigate the effects of non-speech sounds (regular voice, characterized voice, musical sound, and no sound) and basic emotions (anger, fear, happiness, sadness, and surprise) on user perception. While listening to a fairytale with the participant, a humanoid robot (Pepper) responded to the story with a recorded emotional sound and a gesture. Participants showed significantly higher emotion recognition accuracy for the regular voice than for the other sounds. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. The regular voice also induced higher trust, naturalness, and preference compared to the other sounds. Interestingly, the musical sound mostly yielded lower perception ratings than no sound. A further exploratory study was conducted with an additional 49 young adults to investigate the effects of regular non-verbal voices (female and male) and basic emotions (happiness, sadness, anger, and relief) on user perception. We also explored the impact of participants' gender on emotion and social perception toward the robot Pepper. While listening to a fairy tale with the participants, the humanoid robot responded to the story with gestures and emotional voices. Participants showed significantly higher emotion recognition accuracy and social perception in the Voice + Gesture condition than in the Gesture-only condition. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research.
Interestingly, participants felt more discomfort and anthropomorphism in male voices compared to female voices. Male participants were more likely to feel uncomfortable when interacting with Pepper. In contrast, female participants were more likely to feel warm. However, the gender of the robot voice or the gender of the participant did not affect the accuracy of emotion recognition. Results are discussed with social robot design guidelines for emotional cues and future research directions. / Master of Science / As robots increasingly appear in people's lives as functional assistants or for entertainment, there are more and more scenarios in which people interact with robots. More research on human-robot interaction is being proposed to help develop more natural ways of interaction. Our study focuses on the effects of emotions conveyed by a humanoid robot's non-speech sounds on people's perception about the robot and its emotions. The results of our experiments show that the accuracy of emotion recognition of regular voices is significantly higher than that of music and robot-like voices and elicits higher trust, naturalness, and preference. The gender of the robot's voice or the gender of the participant did not affect the accuracy of emotion recognition. People are now not inclined to traditional stereotypes of robotic voices (e.g., like old movies), and expressing emotions with music and gestures mostly shows a lower perception. Happiness and sadness were identified with the highest accuracy among the emotions we studied. Participants felt more discomfort and human-likeness in the male voices than in female voices. Male participants were more likely to feel uncomfortable when interacting with the humanoid robot, while female participants were more likely to feel warm. Our study discusses design guidelines and future research directions for emotional cues in social robots.
180

From a Machine to a Collaborator

Bozorgmehrian, Shokoufeh 05 January 2024 (has links)
This thesis book represents an exploration of the relationship between architecture and robotics, tailored to meet the requirements of architecture students, professionals, and other creative users. The investigation encompasses three distinct robotic arm applications for architecture students, introduces and evaluates an innovative 3D printing application with robotic arms, and presents projects focused on the design of human-robot interaction techniques and their system development. Furthermore, the thesis showcases the development of a more intuitive human-robot interaction system and explores various user interaction methods with robotic arms for rapid prototyping and fabrication. Each experiment describes the process, the level of interaction, and key takeaways. The narrative of the thesis unfolds as a journey through different applications of robotic fabrication, emphasizing the creative human as the focal point of these systems. This thesis underscores the significance of user experience research and anticipates future innovations in the evolving landscape of the creative field. The discoveries made in this exploration lay a foundation for the study and design of interfaces and interaction techniques, fostering seamless collaboration between designers and robotic systems.
Keywords: Robotic Fabrication, Human-Robot Interaction (HRI), Human-Computer Interaction (HCI), User Experience Research, Human-Centered Design, Architecture, Art, Creative Application / Master of Architecture
