181

Learning Video Representation from Self-supervision

Chen, Brian January 2023 (has links)
This thesis investigates the problem of learning video representations for video understanding. Previous works have explored data-driven deep learning approaches, which have proven effective at learning useful video representations. However, obtaining large amounts of labeled data is costly and time-consuming. We investigate self-supervised approaches for multimodal video data to overcome this challenge. Video data typically contains multiple modalities, such as visual frames, audio, transcribed speech, and textual captions, which can serve as pseudo-labels for representation learning without manual annotation. By exploiting these modalities, we can train deep representations on large-scale video data consisting of millions of video clips collected from the internet. We demonstrate the scalability benefits of multimodal self-supervision by achieving new state-of-the-art performance in several domains, including video action recognition, text-to-video retrieval, and text-to-video grounding. We also examine the limitations of these approaches, which often rely on assumed associations between the modalities used for self-supervision; for example, the text transcript is assumed to describe the video content, and two segments of the same video are assumed to share similar semantics. To overcome this problem, we propose new methods for learning video representations with more intelligent sampling strategies that capture samples sharing high-level semantics or consistent concepts. The proposed methods include a clustering component that addresses false negative pairs in multimodal contrastive learning, a novel sampling strategy for finding visually groundable video-text pairs, an investigation of object tracking supervision for temporal association, and a new multimodal task for demonstrating the effectiveness of the proposed model. We aim to develop more robust and generalizable video representations for real-world applications, such as human-robot interaction and event extraction from large-scale news sources.
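To make the clustering idea concrete, the following is a minimal sketch of multimodal contrastive learning with cluster-based false-negative masking, in the spirit of the approach described above. The function name, the InfoNCE formulation, and the masking rule are illustrative assumptions, not the thesis's exact method.

```python
import torch
import torch.nn.functional as F

def multimodal_infonce(video_emb, text_emb, cluster_ids=None, tau=0.07):
    """Contrastive loss over paired video/text embeddings.

    video_emb, text_emb: (N, D) L2-normalized embeddings of N paired clips.
    cluster_ids: optional (N,) cluster assignments; off-diagonal pairs from
    the same cluster are masked out as likely false negatives.
    """
    n = len(video_emb)
    logits = video_emb @ text_emb.t() / tau  # (N, N) similarity matrix
    targets = torch.arange(n, device=video_emb.device)
    if cluster_ids is not None:
        # Entries that share a cluster with the anchor may be semantically
        # matching pairs rather than true negatives, so exclude them.
        same = cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)
        mask = same & ~torch.eye(n, dtype=torch.bool, device=video_emb.device)
        logits = logits.masked_fill(mask, float("-inf"))
    # Symmetric loss: video-to-text and text-to-video retrieval.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```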
182

Do Autistic Individuals Experience the Uncanny Valley Phenomenon?: The Role of Theory of Mind in Human-Robot Interaction

Jaramillo, Isabella 01 August 2015 (has links)
Theory of Mind (ToM) is commonly defined as the ability to understand that others hold beliefs based on their own subjective interpretations and experiences, and that their thoughts are formed independently of one's own. In this study, we examined whether individual differences in ToM lead to different perceptions of interactions with human-like robotics, whether such differences account for different levels of the so-called "Uncanny Valley phenomenon," and whether a fully developed theory of mind is essential to the perception of the interaction. This was assessed by asking whether individuals with Autism Spectrum Disorder (ASD) perceive robotics and artificially intelligent technology in the same way that typically developed individuals do; we focused on the growing use of social robotics in ASD therapies. Studies have indicated that ToM differences exist between individuals with ASD and those who are typically developed. We were also curious whether differences in empathy levels account for differences in ToM, and thus for differences in the perception of human-like robotics. A robotic image rating survey was administered to a group of University of Central Florida students, along with two instruments, the Autism Spectrum Quotient (ASQ) and the Basic Empathy Scale (BES), which together provided a proxy measure of theory of mind. Although the results did not support the claim that individuals with ASD experience the uncanny valley differently than typically developed individuals, the results were significant enough to conclude that different levels of empathy may account for individual differences in the uncanny valley: people with low empathy appeared to experience less of an uncanny valley effect, while people with higher recorded empathy showed greater uncanny valley sensitivity.
183

Is Perceived Intentionality of a Virtual Robot Influenced by the Kinematics?

Sasser, Jordan 01 January 2019 (has links)
Research on human-human interaction has shown that kinematic information makes competitive and cooperative intentions perceivable, and suggests the existence of a cooperation bias. The present study poses the same question for human-robot interaction by investigating the relationship between the acceleration of a virtual robot in a virtual reality environment and participants' perception of the situation as cooperative or competitive, attempting to identify the social cues underlying those perceptions. Each participant experienced five trial conditions (mirrored acceleration, faster acceleration, slower acceleration, varied acceleration with a loss, and varied acceleration with a win), randomized within two blocks of five for a total of ten events. Results suggest that when the virtual robot's acceleration pattern was faster than the participant's, the situation was perceived as more competitive. Additionally, while slower acceleration was perceived as more cooperative, that condition did not differ significantly from mirrored acceleration. These results may indicate that faster accelerations carry kinematic information that invokes stronger competitive perceptions, whereas slower and mirrored accelerations may blend together in perception; furthermore, neither the slower-acceleration nor the mirrored condition yielded a single identifiable contributor to perceived cooperativeness, possibly due to a similar cooperation bias. These findings serve as a baseline for understanding movements that can inform the design of better social robot motion. Such movements would improve interactions between humans and these robots, ultimately improving a robot's ability to assist in collaborative situations.
184

Low-Cost, Real-Time Face Detection, Tracking and Recognition for Human-Robot Interactions

Zhang, Yan 29 June 2011 (has links)
No description available.
185

Development of a Low-Cost Social Robot for Personalized Human-Robot Interaction

Puehn, Christian G. 03 June 2015 (has links)
No description available.
186

Efficient and Robust Video Understanding for Human-robot Interaction and Detection

Li, Ying 09 October 2018 (has links)
No description available.
187

Visual contributions to spatial perception during a remote navigation task

Eshelman-Haynes, Candace Lee 28 July 2009 (has links)
No description available.
188

Human-Robot Interactive Control

Jou, Yung-Tsan January 2003 (has links)
No description available.
189

Adapting the backchanneling behaviour of a social robot to increase user engagement : A study using social robots with contingent backchanneling behaviour / Adaptiv generering av stödsignaler hos en social robot för att öka användarengagemang

Kazzi, Daniel Alexander, Winberg, Vincent January 2022 (has links)
There are many aspects of human communication that affect the nature of an interaction; examples include voice intonation and facial expressions. A particular class of verbal and non-verbal cues, so-called backchannels, plays an underlying role in shaping conversations. In this study we analyse how backchannels can affect the engagement of two participants performing a task with a social robot. Furthermore, given the ever-increasing interest in using social robots in service contexts, we use interviews to analyse the current level of customer acceptance of social robots and which aspects participants consider important when interacting with one in a service setting. The social robot produces contingent backchannels based on engagement levels in order to increase the participation of the least-speaking participant. An interview was conducted after the experiment to analyse participants' attitudes towards the use of social robots in services. Forty people participated in pairs, with each pair assigned to either the experimental or the control condition. In the experimental condition the backchannels were targeted at the least dominant speaker, and in the control condition they were randomly generated. Each pair consisted of one native speaker and one language learner of Swedish. The results showed that in the experimental condition the least dominant speaker spoke more, and participation evened out. The interviews showed mixed attitudes towards the use of social robots in services, with some participants expressing hesitancy about a robot's ability to understand a speaker's desires. / There are many aspects of human-to-human communication that influence how a conversation is perceived, for example tone of voice and facial expressions. A set of these verbal and non-verbal signals is called backchannels, and they fill an important role in conversations. This report investigates how adaptive generation of these signals by a social robot can increase engagement in two participants playing a language-learning game together with the robot. Given the ever-growing interest in deploying social robots in service settings, the report also examines the participants' current attitudes on this. The game was played in Swedish, and the two participants consisted of one native speaker and one non-native speaker. After the experiments, a short interview was conducted to examine the participants' attitudes towards the use of social robots in service contexts. In total, 40 participants took part, divided into 20 pairs, where each pair played the game either under the experimental condition, in which the backchannels were directed at the participant who spoke the least, or the control condition, in which the backchannels were generated randomly. The results showed that in the experimental condition the total speaking time of the less proficient speaker increased compared to the control condition, and the conversation also became more balanced between the two participants. The interviews also showed a generally positive attitude towards the use of robots in service settings, though with some scepticism, as some worried that the robot would not understand what was said.
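As a concrete illustration of the contingent condition, here is a minimal sketch of a policy that tracks recent speaking time and directs backchannels at the least dominant speaker. The class, its sliding-window bookkeeping, and the parameter values are illustrative assumptions; the study's actual robot platform and voice-activity detection are not specified here.

```python
import random
from collections import deque

class ContingentBackchanneler:
    """Toy policy: direct backchannels at the least dominant speaker."""

    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self.events = deque()  # (timestamp, speaker_id, duration) tuples

    def observe_speech(self, t, speaker_id, duration):
        # Record a speech segment and drop events outside the window.
        self.events.append((t, speaker_id, duration))
        while self.events and self.events[0][0] < t - self.window_s:
            self.events.popleft()

    def speaking_share(self, speaker_id):
        # Fraction of recent speaking time attributed to this speaker.
        total = sum(d for _, _, d in self.events) or 1e-9
        return sum(d for _, s, d in self.events if s == speaker_id) / total

    def choose_target(self, speakers, contingent=True):
        if contingent:
            # Experimental condition: back-channel toward whoever has
            # spoken least recently, to even out participation.
            return min(speakers, key=self.speaking_share)
        # Control condition: random target.
        return random.choice(speakers)
```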
190

Intent Recognition Of Rotation Versus Translation Movements In Human-Robot Collaborative Manipulation Tasks

Nguyen, Vinh Q 07 November 2016 (has links) (PDF)
The goal of this thesis is to enable a robot to actively collaborate with a person to move an object in an efficient, smooth and robust manner. For a robot to actively assist a person, it is key that the robot recognizes the actions or phases of a collaborative task. This requires the robot to estimate the person's movement intent. A hurdle in collaboratively moving an object is determining whether the partner is trying to rotate or translate the object (the rotation versus translation problem). In this thesis, Hidden Markov Models (HMMs) are used to recognize human intent to rotate or translate in real time. Based on this recognition, an appropriate impedance control mode is selected to assist the person. The approach is tested on a seven degree-of-freedom industrial robot, a KUKA LBR iiwa 14 R820, working with a human partner during manipulation tasks. Results show the HMMs can estimate human intent with an accuracy of 87.5% using only haptic data recorded from the robot. Integrated with impedance control, the robot is able to collaborate smoothly and efficiently with a person during the manipulation tasks. The HMMs are compared with a switching-function-based approach that uses interaction force magnitudes to recognize rotation versus translation. The results show that the HMMs predict correctly when fast rotation or slow translation is desired, whereas the switching function based on force magnitudes performs poorly in those cases.
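To make the recognition pipeline concrete, the following is a minimal sketch of a two-class HMM recognizer over haptic (force/torque) windows, with intent-dependent impedance mode selection. The feature choice, state counts, and the stiffness mapping are illustrative assumptions rather than the thesis's exact configuration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

def train_intent_hmms(rotation_seqs, translation_seqs, n_states=3):
    """Train one HMM per intent class.

    Each *_seqs is a list of (T_i, 6) arrays of wrench (force/torque)
    samples recorded while a person rotated or translated the object.
    """
    models = {}
    for label, seqs in [("rotate", rotation_seqs),
                        ("translate", translation_seqs)]:
        X = np.vstack(seqs)                 # stack all sequences
        lengths = [len(s) for s in seqs]    # per-sequence lengths for fitting
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_intent(models, window):
    """Pick the intent whose HMM best explains the recent wrench window."""
    return max(models, key=lambda k: models[k].score(window))

def select_impedance_mode(intent):
    # Hypothetical mapping from recognized intent to impedance settings:
    # comply along the axes the person is driving, stiffen the others.
    if intent == "rotate":
        return {"rotational_stiffness": "low", "translational_stiffness": "high"}
    return {"rotational_stiffness": "high", "translational_stiffness": "low"}
```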
