1

Cloning with gesture expressivity

Rajagopal, Manoj Kumar 11 May 2012 (has links)
Virtual environments allow human beings to be represented by virtual humans or avatars. Users share a stronger sense of virtual presence when the avatar looks like the real human it represents. This classically involves turning the avatar into a clone with the real human's appearance and voice. However, the possibility of cloning the gesture expressivity of a real person has received little attention so far. Gesture expressivity combines the style and mood of a person. Expressivity parameters have been defined in earlier work for animating embodied conversational agents. In this work, we focus on expressivity in wrist motion. First, we propose algorithms to estimate three expressivity parameters from captured 3D wrist trajectories: repetition, spatial extent and temporal extent. Then, we conducted a perceptual study, through a user survey, of the relevance of expressivity for recognizing individual humans. We animated a virtual agent using the expressivity estimated from individual humans, and users were asked whether they could recognize the individual human behind each animation. We found that, when gestures are repeated in the animation, this is perceived by users as a discriminative feature for recognizing humans, while the absence of repetition is matched with any human, regardless of whether they repeat gestures or not. More importantly, we found that 75% or more of users could recognize the real human (out of two proposed) from an animated virtual avatar based only on the spatial and temporal extents. Consequently, gesture expressivity is a relevant clue for cloning, and it can be used as another element in the development of a virtual clone that represents a person.
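The abstract above mentions algorithms that estimate expressivity parameters from captured 3D wrist trajectories. As a rough illustration only (the thesis's actual definitions may differ), spatial extent can be taken as the size of the region swept by the wrist, and temporal extent as the average wrist speed; the function below is a hypothetical sketch under those assumptions.

```python
import numpy as np

def expressivity_parameters(traj, fps=30.0):
    """Estimate spatial and temporal extent from a wrist trajectory.

    traj: (N, 3) array-like of 3D wrist positions sampled at `fps` Hz.
    These are illustrative estimators, not the thesis's actual algorithms.
    """
    traj = np.asarray(traj, dtype=float)
    # Spatial extent: diagonal of the bounding box swept by the wrist.
    spatial_extent = float(np.linalg.norm(traj.max(axis=0) - traj.min(axis=0)))
    # Temporal extent: mean wrist speed over the gesture (position units per second).
    velocities = np.diff(traj, axis=0) * fps
    temporal_extent = float(np.linalg.norm(velocities, axis=1).mean())
    return spatial_extent, temporal_extent
```

The repetition parameter would additionally require detecting recurring stroke patterns in the trajectory, which is omitted here.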
3

Evaluating Generated Co-Speech Gestures of Embodied Conversational Agents (ECAs) through Real-Time Interaction

He, Yuan January 2022 (has links)
The gestures of Embodied Conversational Agents (ECAs) can enhance human perception along many dimensions during interactions. In recent years, data-driven gesture-generation approaches for ECAs have attracted considerable research attention and effort, and methods have been continuously optimized. Researchers have typically used human-agent interaction in user studies when evaluating systems of ECAs that generate rule-based gestures. However, when evaluating the performance of ECAs that generate gestures with data-driven methods, participants are often required to watch prerecorded videos, which cannot provide an adequate assessment of human perception during the interaction. To address this limitation, we proposed two main research objectives: first, to explore a workflow for assessing data-driven gesturing ECAs through real-time interaction; second, to investigate whether gestures affect ECAs' human-likeness, animacy and perceived intelligence, and humans' focused attention on ECAs. Our user study had participants interact with two ECAs under two experimental conditions, with and without hand gestures. Both subjective data from the participants' self-report questionnaire and objective data from a gaze tracker were collected. To our knowledge, this study represents the first attempt to evaluate data-driven gesturing ECAs through real-time interaction and the first experiment using gaze tracking to examine the effect of ECA gestures. The eye-gaze data indicated that when an ECA can generate gestures, it attracts more attention to its body.
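The finding that a gesturing ECA attracts more attention to its body rests on reducing gaze-tracker samples to attention measures. A minimal sketch of one such reduction, assuming gaze samples are screen coordinates and the agent's body is approximated by a rectangular area of interest (the thesis's actual areas of interest and metrics may differ):

```python
def attention_share(gaze_points, region):
    """Fraction of gaze samples falling inside a rectangular screen region.

    gaze_points: list of (x, y) screen coordinates from the gaze tracker.
    region: (x_min, y_min, x_max, y_max) area of interest, e.g. the ECA's body.
    A hypothetical reduction of gaze data; not the study's actual analysis.
    """
    if not gaze_points:
        return 0.0
    x0, y0, x1, y1 = region
    # Count samples whose coordinates lie within the area of interest.
    hits = sum(1 for x, y in gaze_points if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(gaze_points)
```

Comparing this share between the gesture and no-gesture conditions would quantify the attention difference the abstract reports.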
