1

Cognitive aspects of embodied conversational agents

Smith, Cameron G. January 2013
Embodied Conversational Agents (ECAs) seek to provide a more natural means of interaction for a user through conversation. ECAs build on the dialogue abilities of spoken dialogue systems with the provision of a physical or virtual avatar. The rationale for this thesis is that an ECA should be able to support a form of conversation capable of understanding both the content and affect of the dialogue and providing a meaningful response. The aim is to examine the cognitive aspects of ECAs attempting such conversational dialogue in order to augment the abilities of dialogue management. The focus is on the provision of cognitive functions, outside of dialogue control, for managing the relationship with the user, including the user's emotional state. This includes a definition of conversation and an examination of the cognitive mechanisms that underpin meaningful conversation. The scope of this thesis covers the development of a Companion ECA, the 'How Was Your Day' (HWYD) Companion, which enters into an open conversation with the user about the events of their day at work. The HWYD Companion attempts to positively influence the user's attitude to these events. The main focus of this thesis is the Affective Strategy Module (ASM), which attends to the information covering these events and the user's emotional state in order to generate a plan for a narrative response. Within this narrative response the ASM embeds a means of influence acting upon the user's attitude to the events. The HWYD Companion has contributed to the work on ECAs through the provision of a system engaging in conversational dialogue, including the affective aspects of such dialogue. It supports open conversation with longer utterances than typical task-oriented dialogue systems and can handle user interruptions. The main work of this thesis provides a major component of this overall contribution and, in addition, makes a specific contribution of its own with the provision of narrative persuasion.
2

Persuasive Embodied Agents: Using Embodied Agents to Change People's Behavior, Beliefs, and Assessments

Pickard, Matthew January 2012
Embodied Conversational Agents (i.e., avatars; ECAs) are appearing in increasingly many everyday contexts, such as e-commerce, occupational training, and airport security. Also common to a typical person's daily life is persuasion. Whether one is being persuaded or persuading, the ability to change another person's attitude or behavior is a thoroughly researched topic. However, little is known about ECAs' ability to persuade and whether basic persuasion principles from human-human interactions hold in human-ECA interactions. This work investigates that question. First, a broad review of the persuasion literature is presented, serving as an inventory of manipulations to test in ECA contexts and a guide for future persuasive-ECA work. The ECA literature is then reviewed. Two preliminary studies exploring the effects of physical attractiveness, voice quality, argument quality, common ground, authority, and facial similarity are presented. Finally, the culminating study, which tests the effectiveness of ECAs at eliciting self-disclosure in automated interviewing, is presented and discussed. Its findings suggest that ECAs may replace humans in automated interviewing contexts, and that ECAs manipulated to look like their interviewees induce greater likeability, establish more rapport, and elicit more self-referencing language than ECAs that do not look like the interviewees.
3

Cloning with gesture expressivity

Rajagopal, Manoj Kumar 11 May 2012
Virtual environments allow human beings to be represented by virtual humans or avatars. Users can share a sense of virtual presence if the avatar looks like the real human it represents. This classically involves turning the avatar into a clone with the real human's appearance and voice. However, the possibility of cloning the gesture expressivity of a real person has received little attention so far. Gesture expressivity combines the style and mood of a person, and expressivity parameters have been defined in earlier works for animating embodied conversational agents. In this work, we focus on expressivity in wrist motion. First, we propose algorithms to estimate three expressivity parameters from captured 3D wrist trajectories: repetition, spatial extent and temporal extent. Then, we conducted a perceptual study, through a user survey, of the relevance of gesture expressivity for recognizing individual humans. We animated a virtual agent using the expressivity estimated from individual humans, and asked users whether they could recognize the individual human behind each animation. We found that repeated gestures in an animation are perceived as a discriminative feature for recognizing humans, while the absence of repetition is matched with any human, regardless of whether they repeat gestures or not. More importantly, we found that 75% or more of users could recognize the real human (out of two proposed) from an animated virtual avatar based only on the spatial and temporal extents. Consequently, gesture expressivity is a relevant clue for cloning, and can be used as another element in the development of a virtual clone that represents a person.
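The abstract names the three expressivity parameters but not how they are computed. The sketch below illustrates one plausible way to estimate them from a captured wrist trajectory; the specific definitions (bounding-box diagonal for spatial extent, above-threshold motion time for temporal extent, direction reversals along the principal axis for repetition) are assumptions for illustration, not the algorithms from the thesis.

```python
import numpy as np

# Illustrative sketch only: the definitions below are assumed, not taken
# from the thesis. traj is a (T, 3) array of captured wrist positions.

def spatial_extent(traj):
    # Assumed definition: diagonal of the trajectory's 3D bounding box.
    return float(np.linalg.norm(traj.max(axis=0) - traj.min(axis=0)))

def temporal_extent(traj, fps, speed_thresh=0.05):
    # Assumed definition: time (s) during which the wrist moves faster
    # than speed_thresh (position units per second).
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1) * fps
    return float((speed > speed_thresh).sum() / fps)

def repetition(traj):
    # Assumed definition: back-and-forth strokes, counted as direction
    # reversals of the motion projected onto its principal axis.
    centered = traj - traj.mean(axis=0)
    axis = np.linalg.svd(centered, full_matrices=False)[2][0]
    vel = np.diff(centered @ axis)
    reversals = np.sum(np.sign(vel[:-1]) != np.sign(vel[1:]))
    return int(reversals // 2)  # two reversals ~ one repetition
```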
4

An embodied conversational agent with autistic behaviour

Venter, Wessel Johannes March 2012
Thesis (MSc)--Stellenbosch University, 2012. In this thesis we describe the creation of an embodied conversational agent which exhibits the behavioural traits of a child who has Asperger Syndrome. The agent is rule-based, rather than artificially intelligent, a decision for which we give justification. We then describe the design and implementation of the agent, paying particular attention to the interaction between emotion, personality and social context. A 3D demonstration program shows that the typical output conforms to Asperger-like answers, with corresponding emotional responses.
5

How can I help you? : The delivery of e-government services by means of a digital assistant.

Raoufi, Matthew M. January 2005
This thesis focuses on the delivery of government services to citizens, with a particular emphasis on enabling those who are at risk of exclusion from the digital society to gain access to it. One of the frequently addressed issues concerning the accessibility of e-services and information on government websites is users' unawareness of the location of the desired information or service. Consequently, the overall objective of the thesis is to find, develop, and assess a solution that provides citizens with a simple and effective means of accessing and utilizing e-government services on government websites. The solution can be seen as a contribution to preventing digital exclusion among citizens caused by difficulties associated with navigation and way-finding, as well as by the complexity of using the digitally provided services.

To reach this objective, the thesis argues for the use of a digital assistant, i.e. an embodied conversational agent able to provide the user with the desired services or information by means of a dialogue. Influenced by a real-life situation, the overall idea behind the use of a digital assistant is that, just as knowledge about the arrangement of available drugs in a pharmacy and a particular drug's whereabouts is of little concern to customers, so too should knowledge of the organization and whereabouts of services and information on a website be of little consequence to users. A digital assistant is therefore expected to act in a similar manner to a human agent who possesses knowledge of the existing information and services, their application area, and where they are stored.

In order to realize the defined objective, this thesis covers all stages of the research process: the identification of users' difficulties in the current situation, the development of a prototype digital assistant, and the assessment and evaluation of the suggested solution in both a laboratory environment and a real-life situation. The thesis concludes by placing the results of the conducted studies in a bigger context to find out whether or not the use of digital assistants improves the delivery of e-government services and citizens' utilization of them, and to clarify and present the research project's practical and methodological findings. With reference to the findings addressing the identified difficulties in the current situation, the conducted studies showed that a digital assistant solution, such as the one described in this thesis, contributes to the accessibility of e-government services. It was also found that a digital assistant could contribute to the delivery of e-government services by: a) reducing the technology barrier caused by traditional input/output technologies, b) reducing the navigation barrier caused by conventional web design, c) reducing the mental load of the user, and d) adding benefits and subjective pleasure.

Another interesting finding concerns the contribution of the digital assistant "approach" to the development of e-government services. Going from a traditional design of web interfaces to a digital assistant approach is similar to moving from an inside-out perspective to a strictly customer- or citizen-oriented perspective. This shift in perspective could be recognized as a small revolution, and it has many consequences for the development of e-government services compared to current traditions.

Taken together, these findings have led to an improvement in the digital assistant developed in this thesis, and suggest possibilities for future work within this area of research.
6

Emergent coordination of speaking turns by the continuous sensory-motor coupling between users and agents

Jégou, Mathieu 05 October 2016
In this thesis, we present a model for the coordination of speaking turns in dyadic user-agent interactions. According to a common view, coordinating turns means avoiding overlaps and minimizing silences between turns; by optimizing turn transitions between users and agents, the user's experience is expected to improve. However, observations of human conversations show a more complex coordination of speaking turns: awkward silences and overlaps, competitive or not, are common. In order to improve the credibility and naturalness of the interaction, the same variability of situations should be observed in a user-agent interaction. Coordinating speaking turns is, however, complex by nature: the coordination is managed by the interaction between participants more than controlled by any one participant alone. To capture this complexity, we elaborated a model emphasizing the continuous sensory-motor coupling between the user and the agent. As a result of this coupling, the behavior of the agent is not entirely controlled by the agent but is an emergent property of the interaction between the user and the agent. We show the capacity of our model to give rise to the different situations linked to the coordination of speaking turns, both in interactions between two agents and in interactions between a user and an agent.
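The abstract does not give the model's equations, so the following is only a toy illustration of the emergent-coordination idea, not the thesis's model: two simulated agents, each with a continuous "urge to speak" that grows while listening, decays with speaking fatigue, and is inhibited by the other's voice. Every quantity and coefficient here is an assumption; the point is that alternating turns, overlaps, and silences can emerge without any explicit turn controller.

```python
import numpy as np

def simulate_turns(steps=5000, dt=0.01, seed=1):
    rng = np.random.default_rng(seed)
    drive = np.array([0.7, 0.3])   # each agent's continuous urge to speak
    on = np.array([False, False])  # is each agent currently vocalizing?
    log = []
    for _ in range(steps):
        me = on.astype(float)
        other = me[::-1]
        # urge grows while silent, decays with speaking fatigue, and is
        # inhibited by the other's voice; noise breaks symmetric deadlocks
        drive += dt * (0.5 * (1 - drive) * (1 - me)
                       - 0.4 * me - 1.5 * other * drive)
        drive += rng.normal(0.0, 0.05, size=2) * np.sqrt(dt)
        drive = np.clip(drive, 0.0, 1.0)
        # hysteresis: start speaking above 0.8, stop again below 0.3
        on = np.where(on, drive > 0.3, drive > 0.8)
        log.append(on.copy())
    return np.array(log)

turns = simulate_turns()
print("speaking fractions:", turns.mean(axis=0))
print("overlap fraction:", turns.all(axis=1).mean())
```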
7

Evaluating Generated Co-Speech Gestures of Embodied Conversational Agents (ECAs) through Real-Time Interaction

He, Yuan January 2022
The gestures of Embodied Conversational Agents (ECAs) can enhance human perception in many dimensions during interactions. In recent years, data-driven gesture generation approaches for ECAs have attracted considerable research attention and effort, and methods have been continuously optimized. Researchers have typically used human-agent interaction in user studies when evaluating systems of ECAs that generate rule-based gestures. However, when evaluating the performance of ECAs that generate gestures based on data-driven methods, participants are often required to watch prerecorded videos, which cannot provide an adequate assessment of human perception during the interaction. To address this limitation, we proposed two main research objectives: first, to explore the workflow of assessing data-driven gesturing ECAs through real-time interaction; second, to investigate whether gestures affect ECAs' human-likeness, animacy, and perceived intelligence, as well as humans' focused attention on ECAs. Our user study required participants to interact with two ECAs under two experimental conditions, with and without hand gestures. Both subjective data from the participants' self-report questionnaires and objective data from a gaze tracker were collected. To our knowledge, the current study represents the first attempt to evaluate data-driven gesturing ECAs through real-time interaction and the first experiment using gaze tracking to examine the effect of ECA gestures. The eye-gaze data indicated that when an ECA can generate gestures, it attracts more attention to its body.
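The abstract does not describe how the gaze data were analyzed. A common approach, and purely an assumption here, is an area-of-interest (AOI) analysis: measure the fraction of gaze samples landing on the agent's body and compare it between the gesture and no-gesture conditions. A minimal sketch:

```python
import numpy as np

# Hypothetical AOI analysis sketch (not the study's actual pipeline):
# fraction of gaze samples that fall inside a rectangular body region.
def aoi_fraction(gaze_xy, aoi_box):
    # gaze_xy: (N, 2) screen coordinates; aoi_box: (x0, y0, x1, y1)
    x0, y0, x1, y1 = aoi_box
    inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
              (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
    return float(inside.mean())

# Example with stand-in data: compare attention across conditions.
rng = np.random.default_rng(0)
with_gestures = rng.uniform(0, 1, size=(1000, 2))     # fake gaze samples
without_gestures = rng.uniform(0, 1, size=(1000, 2))
body = (0.4, 0.2, 0.6, 0.9)                           # assumed body AOI
print(aoi_fraction(with_gestures, body), aoi_fraction(without_gestures, body))
```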
8

Data-driven expressive animation model of speech and laughter for an embodied conversational agent

Ding, Yu 26 September 2014
Our aim is to render expressive multimodal behaviors for embodied conversational agents (ECAs). ECAs are entities endowed with communicative and emotional capabilities, and they have a human-like appearance. When an ECA is speaking or laughing, it is capable of autonomously displaying behaviors to enrich and complement the uttered speech and to convey qualitative information such as emotion. Our research follows a data-driven approach. It focuses on generating the multimodal behaviors of a virtual character speaking with different emotions, and is also concerned with simulating laughing behavior on an ECA. Our aim is to study and develop human-like animation generators for a speaking and laughing ECA. On the basis of the relationship linking speech prosody and multimodal behaviors, our animation generator takes human audio signals as input and outputs multimodal behaviors. Our work focuses on using a statistical framework to capture the relationship between the input and output signals; this relationship is then rendered into synthesized animation. In the training step, the statistical framework is trained on joint features composed of input and output features; the relation between input and output signals is captured and characterized by the parameters of the statistical framework. In the synthesis step, the trained framework is used to produce output signals (facial expressions, head and torso movements) from input signals (F0 and energy for speech, or pseudo-phonemes for laughter). The relation learned during the training phase is thus rendered into the output signals. Our proposed module is based on variants of Hidden Markov Models (HMMs), called contextual HMMs. This model is capable of capturing the relationship between multimodal motions and speech (or laughter); this relationship is then rendered into the ECA's animation.
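As a rough illustration of the joint-feature idea (and only that — the thesis's contextual HMMs are more elaborate, and everything below assumes a model already trained, e.g. by Baum-Welch, with diagonal Gaussians), one can decode a state sequence from the input features alone and emit each state's mean over the output features:

```python
import numpy as np

# Toy sketch of joint-feature HMM regression, not the thesis's model.
# hmm holds log_start (K,), log_trans (K, K), means (K, d_in + d_out)
# and vars (K, d_in + d_out), trained on joint [prosody; motion] vectors.

def gauss_logpdf_diag(x, means, varis):
    # Log-density of x under each state's diagonal Gaussian -> (K,)
    return -0.5 * np.sum(np.log(2 * np.pi * varis)
                         + (x - means) ** 2 / varis, axis=-1)

def viterbi(obs, log_start, log_trans, means, varis):
    # Most likely hidden-state sequence for obs (T, d).
    T, K = len(obs), len(means)
    delta = log_start + gauss_logpdf_diag(obs[0], means, varis)
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + gauss_logpdf_diag(obs[t], means, varis)
    states = np.empty(T, dtype=int)
    states[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        states[t] = back[t + 1, states[t + 1]]
    return states

def synthesize_motion(prosody, hmm, d_in):
    # Decode states from the input dimensions only, then emit each
    # decoded state's mean over the output (motion) dimensions.
    states = viterbi(prosody, hmm["log_start"], hmm["log_trans"],
                     hmm["means"][:, :d_in], hmm["vars"][:, :d_in])
    return hmm["means"][states, d_in:]        # (T, d_out) motion features
```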
