141. Enhancing human-robot interaction using mixed reality. Molina Morillas, Santiago (January 2023)
Industry 4.0 is a new phase of industrial growth ushered in by the rapid development of digital technologies such as the Internet of Things (IoT), artificial intelligence (AI), and robotics. Collaborative robotic products have appeared in this changing environment, enabling robots to work alongside people in open workspaces. The paradigm shift away from autonomous robotics and toward collaborative human-robot interaction (HRI) has made it necessary to look at novel ways to improve output, effectiveness, and safety. Many benefits, including greater autonomy and flexibility, have been made possible by the introduction of Automated Guided Vehicles (AGVs) and, later, Autonomous Mobile Robots (AMRs) for material handling. However, this incorporation of robots into communal workspaces also raises safety issues that must be taken into account. This thesis aims to address potential threats arising from increasing automation on shop floors and in workplaces shared between AMRs and human operators by exploring the capabilities of Mixed Reality (MR) technologies. By harnessing MR's capabilities, the aim is to mitigate safety concerns and optimize the effectiveness of collaborative environments. To achieve this, the research is structured around the following sub-objectives: the development of a communication network enabling interaction among all devices in the shared workspace, and the creation of an MR user interface promoting accessibility for human operators. A comprehensive literature review was conducted to analyse existing proposals aimed at improving HRI through various techniques and approaches. The objective was to leverage MR technologies to enhance collaboration and address safety concerns, thereby ensuring the smooth integration of AMRs into shared workspaces.
While the literature review revealed limited research using MR for data visualization in this specific domain, the goal of this thesis was to go beyond existing solutions by developing a comprehensive approach that prioritizes safety and facilitates operator adaptation. The research findings highlight the superiority of MR in displaying critical information about robot intentions and in identifying safe zones with reduced AMR activity. The use of HoloLens 2 devices, known for their ergonomic design, ensures operator comfort during extended use while enhancing the accuracy of tracking positions and intentions in highly automated environments. The presented information is designed to be concise, customizable, and easily comprehensible, preventing information overload for operators. The implementation of MR technologies within shared workspaces necessitates ethical considerations, including transparent data collection and user consent. Building trust is essential to establish MR as a reliable tool that enhances operator working conditions and safety. Importantly, the integration of MR technologies does not pose a threat of job displacement but rather facilitates the smooth adaptation of new operators to collaborative environments. The implemented features augment existing safety protocols without compromising their efficacy, resulting in an overall improvement in safety within the collaborative workspace. In conclusion, this research demonstrates the effectiveness of MR technologies in bolstering HRI, addressing safety concerns, and enhancing operator working conditions within collaborative shop-floor environments. Despite limitations in time, complexity, and available information, the developed solution shows potential for further improvement. The chosen methodology and philosophical paradigm enabled the research objectives to be attained, and crucial ethical considerations have been addressed. Ultimately, this thesis proposes and explains potential future implementations, aiming to expand the current capabilities of the solution.
142. The Effects of a Humanoid Robot's Non-lexical Vocalization on Emotion Recognition and Robot Perception. Liu, Xiaozhen (30 June 2023)
As robots have become more pervasive in our everyday lives, the social aspects of robots have attracted researchers' attention. Because emotions play a key role in social interactions, research has been conducted on conveying emotions via speech, whereas little research has focused on the effects of non-speech sounds on users' perception of robots. We conducted a within-subjects exploratory study with 40 young adults to investigate the effects of non-speech sounds (regular voice, characterized voice, musical sound, and no sound) and basic emotions (anger, fear, happiness, sadness, and surprise) on user perception. While listening to a fairy tale with the participant, a humanoid robot (Pepper) responded to the story with a recorded emotional sound accompanied by a gesture. Participants showed significantly higher emotion recognition accuracy for the regular voice than for the other sounds. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. The regular voice also induced higher trust, naturalness, and preference compared to the other sounds. Interestingly, the musical sound mostly yielded lower perception ratings than no sound.
A further exploratory study was conducted with an additional 49 young people to investigate the effects of regular non-verbal voices (female and male) and basic emotions (happiness, sadness, anger, and relief) on user perception. We also explored the impact of participants' gender on emotional and social perception of the robot Pepper. While listening to a fairy tale with the participants, a humanoid robot (Pepper) responded to the story with gestures and emotional voices. Participants showed significantly higher emotion recognition accuracy and social perception in the voice-plus-gesture condition than in the gesture-only condition. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. Interestingly, participants felt more discomfort and anthropomorphism with male voices than with female voices. Male participants were more likely to feel uncomfortable when interacting with Pepper, whereas female participants were more likely to feel warmth. However, neither the gender of the robot's voice nor the gender of the participant affected emotion recognition accuracy. Results are discussed in terms of social robot design guidelines for emotional cues and future research directions. / Master of Science / As robots increasingly appear in people's lives as functional assistants or as entertainment, there are more and more scenarios in which people interact with robots, and more human-robot interaction research is being proposed to help develop more natural ways of interacting. Our study focuses on the effects of emotions conveyed by a humanoid robot's non-speech sounds on people's perception of the robot and its emotions. The results of our experiments show that emotion recognition accuracy for regular voices is significantly higher than for musical and robot-like voices, and that regular voices elicit higher trust, naturalness, and preference. The gender of the robot's voice and the gender of the participant did not affect emotion recognition accuracy. People are no longer drawn to traditional stereotypes of robotic voices (e.g., those heard in old movies), and expressing emotions with music and gestures mostly yielded lower perception ratings. Happiness and sadness were identified with the highest accuracy among the emotions we studied. Participants felt more discomfort and human-likeness in the male voices than in the female voices. Male participants were more likely to feel uncomfortable when interacting with the humanoid robot, while female participants were more likely to feel warmth. Our study discusses design guidelines and future research directions for emotional cues in social robots.
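Both experiments report per-emotion recognition accuracy read off a confusion matrix. As a minimal sketch of that computation (the emotion set matches the first study, but the counts below are invented for illustration and are not the study's data):

```python
# Illustrative confusion matrix (rows: true emotion, columns: predicted).
# The counts are made up for demonstration; they are not the study's data.
emotions = ["anger", "fear", "happiness", "sadness", "surprise"]
confusion = [
    [6, 1, 0, 2, 1],   # anger
    [2, 5, 0, 2, 1],   # fear
    [0, 0, 9, 0, 1],   # happiness
    [1, 1, 0, 8, 0],   # sadness
    [1, 2, 1, 0, 6],   # surprise
]

def per_class_accuracy(cm):
    """Recognition accuracy per true emotion: diagonal count / row total."""
    return [row[i] / sum(row) for i, row in enumerate(cm)]

def overall_accuracy(cm):
    """Fraction of all trials where the predicted emotion matched the true one."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total
```

With these illustrative counts, happiness (0.9) and sadness (0.8) come out highest, mirroring the pattern the abstract describes.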
143. From a Machine to a Collaborator. Bozorgmehrian, Shokoufeh (5 January 2024)
This thesis book explores the relationship between architecture and robotics, tailored to the needs of architecture students, professionals, and other creative users. The investigation encompasses three distinct robotic-arm applications for architecture students, introduces and evaluates an innovative 3D printing application with robotic arms, and presents projects focused on the design of human-robot interaction techniques and their system development. Furthermore, the thesis showcases the development of a more intuitive human-robot interaction system and explores various methods of user interaction with robotic arms for rapid prototyping and fabrication. Each experiment describes the process, the level of interaction, and key takeaways. The narrative of the thesis unfolds as a journey through different applications of robotic fabrication, placing the creative human at the focal point of these systems. The thesis underscores the significance of user experience research and anticipates future innovations in the evolving landscape of the creative field. The discoveries made in this exploration lay a foundation for the study and design of interfaces and interaction techniques, fostering seamless collaboration between designers and robotic systems. Keywords: Robotic Fabrication - Human-Robot Interaction (HRI) - Human-Computer Interaction (HCI) - User Experience Research - Human-Centered Design - Architecture - Art - Creative Application / Master of Architecture
144. Learning Video Representation from Self-supervision. Chen, Brian (January 2023)
This thesis investigates the problem of learning video representations for video understanding. Previous works have explored data-driven deep learning approaches, which have been shown to be effective at learning useful video representations. However, obtaining large amounts of labeled data can be costly and time-consuming. We investigate self-supervised approaches for multimodal video data to overcome this challenge. Video data typically contains multiple modalities, such as visual frames, audio, transcribed speech, and textual captions, which can serve as pseudo-labels for representation learning without manual labeling. By utilizing these modalities, we can train deep representations on large-scale video data consisting of millions of video clips collected from the internet. We demonstrate the scalability benefits of multimodal self-supervision by achieving new state-of-the-art performance in several domains, including video action recognition, text-to-video retrieval, and text-to-video grounding.
We also examine the limitations of these approaches, which often rely on assumed associations between the modalities used in self-supervision. For example, the text transcript is often assumed to describe the video content, and two segments of the same video are assumed to share similar semantics. To overcome this problem, we propose new methods for learning video representations with more intelligent sampling strategies that capture samples sharing high-level semantics or consistent concepts. The proposed methods include a clustering component to address false-negative pairs in multimodal paired contrastive learning, a novel sampling strategy for finding visually groundable video-text pairs, an investigation of object-tracking supervision for temporal association, and a new multimodal task demonstrating the effectiveness of the proposed model. We aim to develop more robust and generalizable video representations for real-world applications, such as human-robot interaction and event extraction from large-scale news sources.
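The clustering component mentioned in the abstract targets false negatives in contrastive learning: two clips that share high-level semantics should not be pushed apart as negatives. A minimal sketch of the general idea, in which off-diagonal pairs from the same semantic cluster are simply excluded from the softmax denominator (a generic illustration of the technique, not the thesis's exact formulation):

```python
import math

def contrastive_loss_with_clusters(video_emb, text_emb, cluster_ids, temp=0.07):
    """InfoNCE-style loss over paired video/text embeddings that masks out
    likely false negatives: off-diagonal pairs whose samples fall in the same
    semantic cluster are excluded from the negative set."""
    def normalize(vec):
        norm = math.sqrt(sum(x * x for x in vec))
        return [x / norm for x in vec]

    v = [normalize(e) for e in video_emb]
    t = [normalize(e) for e in text_emb]
    n = len(v)
    losses = []
    for i in range(n):
        # Temperature-scaled cosine similarities of video i to every text j.
        logits = [sum(a * b for a, b in zip(v[i], t[j])) / temp for j in range(n)]
        # Keep the true positive (diagonal) and only cross-cluster negatives.
        keep = [j for j in range(n) if j == i or cluster_ids[j] != cluster_ids[i]]
        # Numerically stable log-softmax over the kept entries.
        m = max(logits[j] for j in keep)
        denom = sum(math.exp(logits[j] - m) for j in keep)
        losses.append(-(logits[i] - m - math.log(denom)))
    return sum(losses) / n
```

When every sample falls in the same cluster, no negatives remain and the loss collapses to zero; with distinct clusters it behaves like a standard contrastive objective.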
145. Low-Cost, Real-Time Face Detection, Tracking and Recognition for Human-Robot Interactions. Zhang, Yan (29 June 2011)
No description available.
146. Development of a Low-Cost Social Robot for Personalized Human-Robot Interaction. Puehn, Christian G. (3 June 2015)
No description available.
147. Efficient and Robust Video Understanding for Human-robot Interaction and Detection. Li, Ying (9 October 2018)
No description available.
148. Visual contributions to spatial perception during a remote navigation task. Eshelman-Haynes, Candace Lee (28 July 2009)
No description available.
149. Human-Robot Interactive Control. Jou, Yung-Tsan (2003)
No description available.
150. Adapting the backchanneling behaviour of a social robot to increase user engagement: A study using social robots with contingent backchanneling behaviour. Kazzi, Daniel Alexander and Winberg, Vincent (2022)
There are many aspects of human communication that affect the nature of an interaction; examples include voice intonation and facial expressions. A particular type of verbal and non-verbal cue, the so-called backchannel, plays an underlying role in shaping conversations. In this study we analyse how backchannels can affect the engagement of two participants performing a task with a social robot. Furthermore, given the ever-increasing interest in using social robots in service contexts, we analyse the current level of customer acceptance of social robots and which aspects participants consider important when interacting with one in a service setting. The social robot produces contingent backchannels based on engagement levels to increase the participation of the least active speaker. An interview was conducted after the experiment to analyse the participants' attitudes towards the use of social robots in services. 40 people participated in pairs, where each pair was assigned to either the experimental or the control condition. In the experimental condition the backchannels were targeted at the least dominant speaker, and in the control condition the backchannels were randomly generated. Each pair consisted of one native speaker and one learner of Swedish. The results showed that in the experimental condition the least dominant speaker spoke more, and participation evened out between the two speakers. The interviews showed mixed attitudes towards the use of social robots in services, with some participants expressing hesitancy about the robot's ability to understand what a speaker wants. / There are many aspects of communication between people that influence how a conversation is perceived, for example tone of voice and facial expressions. A set of these verbal and non-verbal signals is called backchannels, and they fill an important role in conversations.
This report examines how adaptive generation of these signals by a social robot can increase the engagement of two participants playing a language-learning game together with the robot. Given the growing interest in applying social robots in service settings, the report also examines the participants' current attitudes on the matter. The game was played in Swedish, and the two participants consisted of one native speaker and one non-native speaker. After the experiments, a short interview was conducted to examine the participants' attitudes towards the use of social robots in service contexts. In total, 40 people took part, divided into 20 pairs, where each pair played the game either under the experimental condition, in which the backchannels were directed at the participant who spoke the least, or under the control condition, in which the backchannels were generated randomly. The results showed that in the experimental condition the total speaking time of the less proficient speaker increased compared with the control condition, while the conversation also became more balanced between the two participants. The interviews also showed an overall positive attitude towards the use of robots in service settings, though there was some scepticism, as some participants worried that the robot would not understand what they said.
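The contingent-versus-random target selection that distinguishes the two conditions can be sketched as follows; the function name and the use of accumulated speaking time as the engagement signal are assumptions for illustration, not the study's actual implementation:

```python
import random

def pick_backchannel_target(speech_time, contingent=True, rng=None):
    """Choose which participant the robot's next backchannel should address.

    speech_time maps participant id -> accumulated speaking time in seconds.
    In the experimental (contingent) condition, the least dominant speaker,
    i.e. the one who has spoken the least so far, is targeted; in the control
    condition, the target is drawn at random.
    """
    rng = rng or random.Random()
    if contingent:
        return min(speech_time, key=speech_time.get)
    return rng.choice(sorted(speech_time))
```

For example, with speaking times {"A": 132.5, "B": 47.0}, the contingent condition always targets "B", while the control condition picks either participant at random.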