  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Moderating Influence as a Design Principle for Human-Swarm Interaction

Ashcraft, C Chace 01 April 2019 (has links)
Robot swarms have recently attracted interest in both industry and academia for their potential to perform difficult or dangerous tasks efficiently. As real robot swarms become more feasible, many want swarms to be controlled or directed by a human, which raises questions about how that should be done. Part of the challenge of human-swarm interaction is the difficulty of understanding swarm state and knowing how to drive the swarm to produce emergent behaviors. Poor human input with sufficient influence over swarm agents can inhibit desirable swarm behaviors and degrade overall performance. Thus, with too little influence, human input is useless, but with too much, it can be destructive. We suggest that there is some middle level, or interval, of human influence that allows the swarm to take advantage of useful human input while minimizing the effect of destructive input. Further, we propose that human-swarm interaction schemes can be designed to maintain an appropriate level of human influence over the swarm and to maintain or improve swarm performance in the presence of both useful and destructive human input. We test this theory by implementing software that dynamically moderates influence and evaluating it with a simulated honey bee colony performing nest site selection, first with simulated human input and then with actual human input in a user study. The results suggest that moderating influence in this way is important for maintaining high performance in the presence of both useful and destructive human input. However, while our software appears to moderate influence successfully with simulated human input, it fails to do so with actual human input.
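The moderating-influence idea in this abstract can be sketched as a simple gain clamp: human input is blended with the swarm's autonomous consensus, and the blending weight is held inside a middle interval so the human is neither ignored nor dominant. All names, thresholds, and the update rule below are illustrative assumptions, not the thesis's actual moderation software.

```python
def moderated_update(agent_state, swarm_consensus, human_input,
                     influence, low=0.2, high=0.6, rate=0.1):
    """Blend human input with swarm consensus, keeping the human
    influence weight inside a moderating interval [low, high].

    All parameter names and values are illustrative; the thesis's
    actual moderation software is not reproduced here."""
    # Clamp the human influence weight into the middle interval.
    w = min(max(influence, low), high)
    # Blended target: partly swarm consensus, partly human direction.
    target = (1.0 - w) * swarm_consensus + w * human_input
    # Move the agent's state a small step toward the blended target.
    return agent_state + rate * (target - agent_state)
```

Even when the requested influence is 1.0 (or 0.0), the effective weight stays inside the interval, so a destructive operator cannot fully override the swarm and a useful one is never silenced.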
142

Socially aware robot navigation

Antonucci, Alessandro 03 November 2022 (has links)
A growing number of applications involving autonomous mobile robots require navigation through environments shared with humans. In those situations, the robot's actions are socially acceptable if they reflect the behaviours that humans would exhibit in similar conditions. The robot must therefore perceive the people in the environment and react correctly based on their actions and their relevance to its mission. To advance human-robot interaction, the proposed research focuses on efficient robot motion algorithms covering all the tasks needed in the whole process, such as obstacle detection, human motion tracking and prediction, and socially aware navigation. The final framework presented in this thesis is a robust and efficient solution that enables the robot to correctly understand human intentions and consequently perform safe, legible, and socially compliant actions. The thesis retraces in its structure the different steps of the framework, presenting the algorithms and models developed and the experimental evaluations carried out both in simulation and on real robotic platforms, and showing the performance obtained in real time in complex scenarios where humans are present and play a prominent role in the robot's decisions. The proposed implementations are all based on insightful combinations of traditional model-based techniques and machine learning algorithms, fused to effectively solve the human-aware navigation problem. The synergy of the two methodologies gives greater flexibility and generalization than previously proposed navigation approaches, while maintaining an accuracy and reliability that learning methods alone do not always display.
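A classic example of the model-based side such a framework might fuse with learned human-motion predictors is a social force update: the robot is attracted toward its goal and exponentially repelled by nearby humans. This is a generic textbook sketch, not the thesis's framework; all parameter values are assumptions.

```python
import numpy as np

def social_force_step(pos, vel, goal, humans, dt=0.1,
                      tau=0.5, v_des=1.0, a=2.0, b=0.5):
    """One integration step of a minimal social force model.

    pos, vel, goal: 2D numpy arrays; humans: list of 2D positions.
    Parameters (relaxation time tau, desired speed v_des, repulsion
    strength a and range b) are illustrative."""
    # Attractive force: relax toward the desired velocity to the goal.
    direction = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    f_goal = (v_des * direction - vel) / tau
    # Repulsive force: decays exponentially with distance to each human.
    f_rep = np.zeros(2)
    for h in humans:
        diff = pos - h
        dist = np.linalg.norm(diff) + 1e-9
        f_rep += a * np.exp(-dist / b) * diff / dist
    vel = vel + dt * (f_goal + f_rep)
    return pos + dt * vel, vel
```

In a hybrid pipeline, the learned predictor would supply the future human positions fed into the repulsion term, while the model-based update keeps the motion interpretable and safe.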
143

Enhancing human-robot interaction using mixed reality

Molina Morillas, Santiago January 2023 (has links)
Industry 4.0 is a new phase of industrial growth ushered in by the rapid development of digital technologies such as the Internet of Things (IoT), artificial intelligence (AI), and robotics. Collaborative robotic products have appeared in this changing environment, enabling robots to work alongside people in open workspaces. The paradigm shift away from fully autonomous robotics toward collaborative human-robot interaction (HRI) has made it necessary to look at novel ways to improve productivity, efficiency, and safety. The introduction of Automated Guided Vehicles (AGVs) and, later, Autonomous Mobile Robots (AMRs) for material handling has brought many benefits, including greater autonomy and flexibility. However, incorporating robots into communal workspaces also raises safety issues that must be taken into account. This thesis addresses potential threats arising from increasing automation on shopfloors and in workplaces shared between AMRs and human operators by exploring the capabilities of Mixed Reality (MR) technologies. By harnessing MR's capabilities, the aim is to mitigate safety concerns and optimize the effectiveness of collaborative environments. To achieve this, the research is structured around the following sub-objectives: the development of a communication network enabling interaction among all devices in the shared workspace, and the creation of an MR user interface promoting accessibility for human operators. A comprehensive literature review was conducted to analyse existing proposals aimed at improving HRI through various techniques and approaches. The objective was to leverage MR technologies to enhance collaboration and address safety concerns, thereby ensuring the smooth integration of AMRs into shared workspaces.
While the literature review revealed limited research using MR for data visualization in this domain, the goal of this thesis was to go beyond existing solutions by developing a comprehensive approach that prioritizes safety and facilitates operator adaptation. The findings highlight MR's strength in displaying critical information about robot intentions and in identifying safe zones with reduced AMR activity. The use of HoloLens 2 devices, known for their ergonomic design, ensures operator comfort during extended use while improving the accuracy of tracked positions and intentions in highly automated environments. The presented information is designed to be concise, customizable, and easily comprehensible, preventing information overload for operators. The implementation of MR technologies within shared workspaces necessitates ethical considerations, including transparent data collection and user consent. Building trust is essential to establishing MR as a reliable tool that improves operator working conditions and safety. Importantly, the integration of MR technologies does not threaten jobs with displacement but rather facilitates the smooth adaptation of new operators to collaborative environments. The implemented features augment existing safety protocols without compromising efficacy, resulting in an overall improvement in safety within the collaborative workspace. In conclusion, this research demonstrates the effectiveness of MR technologies in bolstering HRI, addressing safety concerns, and improving operator working conditions within collaborative shopfloor environments. Despite limitations in time, complexity, and available information, the developed solution shows potential for further improvement. The chosen methodology and philosophical paradigm successfully attained the research objectives, and crucial ethical considerations were addressed.
Ultimately, this thesis proposes and explains potential future implementations aimed at expanding the current capabilities of the solution.
144

The Effects of a Humanoid Robot's Non-lexical Vocalization on Emotion Recognition and Robot Perception

Liu, Xiaozhen 30 June 2023 (has links)
As robots have become more pervasive in everyday life, the social aspects of robots have attracted researchers' attention. Because emotions play a key role in social interactions, research has been conducted on conveying emotions via speech, whereas little research has focused on the effects of non-speech sounds on users' perception of robots. We conducted a within-subjects exploratory study with 40 young adults to investigate the effects of non-speech sounds (regular voice, characterized voice, musical sound, and no sound) and basic emotions (anger, fear, happiness, sadness, and surprise) on user perception. While a participant listened to a fairy tale, a humanoid robot (Pepper) responded to the story with a recorded emotional sound and a gesture. Participants showed significantly higher emotion recognition accuracy for the regular voice than for the other sounds. The confusion matrix showed that happiness and sadness had the highest recognition accuracy, which aligns with previous research. The regular voice also induced higher trust, naturalness, and preference than the other sounds. Interestingly, the musical sound mostly yielded lower perception ratings than no sound. A further exploratory study was conducted with an additional 49 young adults to investigate the effects of regular non-verbal voices (female and male) and basic emotions (happiness, sadness, anger, and relief) on user perception. We also explored the impact of participants' gender on emotion and social perception toward the robot Pepper. While a participant listened to a fairy tale, the robot responded to the story with gestures and emotional voices. Participants showed significantly higher emotion recognition accuracy and social perception in the voice-plus-gesture condition than in the gesture-only condition. The confusion matrix again showed that happiness and sadness had the highest recognition accuracy, aligning with previous research.
Interestingly, participants reported more discomfort and greater anthropomorphism for male voices than for female voices. Male participants were more likely to feel uncomfortable when interacting with Pepper, whereas female participants were more likely to feel warmth. However, neither the gender of the robot voice nor the gender of the participant affected emotion recognition accuracy. Results are discussed together with social robot design guidelines for emotional cues and future research directions. / Master of Science / As robots increasingly appear in people's lives as functional assistants or for entertainment, there are more and more scenarios in which people interact with robots, and more human-robot interaction research is being proposed to help develop more natural ways of interacting. Our study focuses on the effects of emotions conveyed by a humanoid robot's non-speech sounds on people's perception of the robot and its emotions. The results of our experiments show that emotion recognition accuracy for regular voices is significantly higher than for music and robot-like voices, and that regular voices elicit higher trust, naturalness, and preference. The gender of the robot's voice and the gender of the participant did not affect emotion recognition accuracy. People are no longer drawn to traditional stereotypes of robotic voices (e.g., those of old movies), and expressing emotions with music and gestures mostly yielded lower perception ratings. Happiness and sadness were identified with the highest accuracy among the emotions we studied. Participants felt more discomfort and human-likeness in male voices than in female voices. Male participants were more likely to feel uncomfortable when interacting with the humanoid robot, while female participants were more likely to feel warmth. Our study discusses design guidelines and future research directions for emotional cues in social robots.
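The per-emotion recognition accuracy this abstract reads off the confusion matrix is simply each row's diagonal count over its row total (per-class recall). A minimal sketch, using hypothetical counts rather than the study's data:

```python
import numpy as np

def per_class_accuracy(confusion):
    """Per-emotion recognition accuracy (recall) from a confusion
    matrix whose rows are true emotions and columns are responses."""
    confusion = np.asarray(confusion, dtype=float)
    return np.diag(confusion) / confusion.sum(axis=1)

# Hypothetical counts for (happiness, sadness, anger); 20 trials each.
cm = [[18, 1, 1],
      [2, 17, 1],
      [5, 4, 11]]
acc = per_class_accuracy(cm)  # happiness 0.90, sadness 0.85, anger 0.55
```

With such a matrix, happiness and sadness would stand out exactly as the study reports, while off-diagonal rows reveal which emotions are confused with which.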
145

From a Machine to a Collaborator

Bozorgmehrian, Shokoufeh 05 January 2024 (has links)
This thesis represents an exploration of the relationship between architecture and robotics, tailored to the needs of architecture students, professionals, and other creative users. The investigation encompasses three distinct robotic arm applications for architecture students, introduces and evaluates an innovative 3D printing application with robotic arms, and presents projects focused on the design of human-robot interaction techniques and their system development. Furthermore, the thesis showcases the development of a more intuitive human-robot interaction system and explores various user interaction methods with robotic arms for rapid prototyping and fabrication. Each experiment describes the process, the level of interaction, and key takeaways. The narrative of the thesis unfolds as a journey through different applications of robotic fabrication, with the creative human as the focal point of these systems. The thesis underscores the significance of user experience research and anticipates future innovations in the evolving landscape of the creative field. The discoveries made in this exploration lay a foundation for the study and design of interfaces and interaction techniques, fostering seamless collaboration between designers and robotic systems. Keywords: Robotic Fabrication - Human-Robot Interaction (HRI) - Human-Computer Interaction (HCI) - User Experience Research - Human-Centered Design - Architecture - Art - Creative Application / Master of Architecture
146

Learning Video Representation from Self-supervision

Chen, Brian January 2023 (has links)
This thesis investigates the problem of learning video representations for video understanding. Previous works have explored data-driven deep learning approaches, which have been shown to be effective at learning useful video representations. However, obtaining large amounts of labeled data can be costly and time-consuming. We investigate self-supervised approaches for multimodal video data to overcome this challenge. Video data typically contains multiple modalities, such as visual frames, audio, transcribed speech, and textual captions, which can serve as pseudo-labels for representation learning without manual labeling. By utilizing these modalities, we can train deep representations over large-scale video data consisting of millions of video clips collected from the internet. We demonstrate the scalability benefits of multimodal self-supervision by achieving new state-of-the-art performance in various domains, including video action recognition, text-to-video retrieval, and text-to-video grounding. We also examine the limitations of these approaches, which often rest on assumptions about the association between the modalities used in self-supervision: for example, that the text transcript describes the video content, or that two segments of the same video share similar semantics. To overcome this problem, we propose new methods for learning video representations with more intelligent sampling strategies that capture samples sharing high-level semantics or consistent concepts. The proposed methods include a clustering component that addresses false negative pairs in multimodal paired contrastive learning, a novel sampling strategy for finding visually groundable video-text pairs, an investigation of object tracking supervision for temporal association, and a new multimodal task demonstrating the effectiveness of the proposed model.
We aim to develop more robust and generalizable video representations for real-world applications, such as human-robot interaction and event extraction from large-scale news sources.
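The "clustering component to address false negative pairs" alludes to masking same-cluster negatives in an InfoNCE-style contrastive objective. The sketch below illustrates that general idea only; the loss form, shapes, and temperature are assumptions, not the thesis's model.

```python
import numpy as np

def masked_contrastive_loss(video_emb, text_emb, cluster_ids, temp=0.07):
    """InfoNCE-style loss over paired (N, D) video/text embeddings in
    which negatives from the same cluster as the anchor are masked out,
    a simple stand-in for false-negative handling via clustering."""
    cluster_ids = np.asarray(cluster_ids)
    # Cosine similarity via L2-normalized embeddings.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temp                       # (N, N) similarities
    n = logits.shape[0]
    same_cluster = cluster_ids[:, None] == cluster_ids[None, :]
    # Mask off-diagonal same-cluster pairs: likely false negatives.
    mask = same_cluster & ~np.eye(n, dtype=bool)
    logits = np.where(mask, -np.inf, logits)
    # Cross-entropy against the diagonal (the true pairs).
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```

Masking can only shrink each row's denominator, so removing suspected false negatives never increases the loss; the design question the thesis tackles is how to obtain reliable clusters in the first place.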
147

Low-Cost, Real-Time Face Detection, Tracking and Recognition for Human-Robot Interactions

Zhang, Yan 29 June 2011 (has links)
No description available.
148

Development of a Low-Cost Social Robot for Personalized Human-Robot Interaction

Puehn, Christian G. 03 June 2015 (has links)
No description available.
149

Efficient and Robust Video Understanding for Human-robot Interaction and Detection

Li, Ying 09 October 2018 (has links)
No description available.
150

Visual contributions to spatial perception during a remote navigation task

Eshelman-Haynes, Candace Lee 28 July 2009 (has links)
No description available.
