About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Cognitive aspects of embodied conversational agents

Smith, Cameron G. January 2013 (has links)
Embodied Conversational Agents (ECA) seek to provide a more natural means of interaction for a user through conversation. ECA build on the dialogue abilities of spoken dialogue systems with the provision of a physical or virtual avatar. The rationale for this Thesis is that an ECA should be able to support a form of conversation capable of understanding both the content and affect of the dialogue and providing a meaningful response. The aim is to examine the cognitive aspects of ECA attempting such conversational dialogue in order to augment the abilities of dialogue management. The focus is on the provision of cognitive functions, outside of dialogue control, for managing the relationship with the user including the user’s emotional state. This will include a definition of conversation and an examination of the cognitive mechanisms that underpin meaningful conversation. The scope of this Thesis considers the development of a Companion ECA, the ‘How Was Your Day’ (HWYD) Companion, which enters into an open conversation with the user about the events of their day at work. The HWYD Companion attempts to positively influence the user’s attitude to these events. The main focus of this Thesis is on the Affective Strategy Module (ASM) which will attend to the information covering these events and the user’s emotional state in order to generate a plan for a narrative response. Within this narrative response the ASM will embed a means of influence acting upon the user’s attitude to the events. The HWYD Companion has contributed to the work on ECA through the provision of a system engaging in conversational dialogue including the affective aspects of such dialogue. This supports open conversation with longer utterances than typical task-oriented dialogue systems and can handle user interruptions. The main work of this Thesis provides a major component of this overall contribution and, in addition, provides a specific contribution of its own with the provision of narrative persuasion.
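To make the idea of an affective strategy module more concrete, the following is a minimal, purely illustrative sketch of how a module of that kind could map appraised day events and the user's mood to a plan of narrative moves. The event representation, strategy names, and selection rules are invented for illustration and are not the HWYD Companion's actual ASM.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    description: str   # what happened during the user's day
    valence: float     # -1.0 (very negative) .. 1.0 (very positive)

def plan_response(events: List[Event], user_valence: float) -> List[str]:
    """Return an ordered plan of narrative moves for one response turn."""
    worst = min(events, key=lambda e: e.valence)
    plan = [f"acknowledge: {worst.description}"]
    if user_valence < 0:
        # Negative mood: empathise first, then try to reframe the worst event.
        plan.append("empathise with the user's feelings")
        plan.append(f"reframe: offer a less negative reading of '{worst.description}'")
    else:
        # Neutral or positive mood: reinforce the most positive event instead.
        best = max(events, key=lambda e: e.valence)
        plan.append(f"reinforce: highlight '{best.description}'")
    return plan

if __name__ == "__main__":
    day = [Event("missed a project deadline", -0.7),
           Event("was praised by a colleague", 0.6)]
    print(plan_response(day, user_valence=-0.4))
```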
2

Persuasive Embodied Agents: Using Embodied Agents to Change People's Behavior, Beliefs, and Assessments

Pickard, Matthew January 2012 (has links)
Embodied Conversational Agents (i.e., avatars; ECAs) are appearing in increasingly many everyday contexts, such as e-commerce, occupational training, and airport security. Also common to a typical person's daily life is persuasion. Whether being persuaded or persuading, the ability to change another person's attitude or behavior is a thoroughly researched topic. However, little is known about ECAs' ability to persuade and whether basic persuasion principles from human-human interactions will hold in human-ECA interactions. This work investigates this question. First, a broad review of the persuasion literature is presented, which serves as an inventory of manipulations to test in ECA contexts and a guide for future Persuasive ECA work. The ECA literature is then reviewed. Two preliminary studies exploring the effects of physical attractiveness, voice quality, argument quality, common ground, authority, and facial similarity are presented. Finally, the culminating study testing the effectiveness of ECAs in eliciting self-disclosure in automated interviewing is presented and discussed. The findings of that automated interviewing study suggest that ECAs may replace humans in automated interviewing contexts. The findings also suggest that ECAs that are manipulated to look like their interviewees induce greater likeability, establish more rapport, and elicit more self-referencing language than ECAs that do not look like the interviewees.
3

Assessment of adoption, usability, and trustability of conversational agents in the diagnosis, treatment, and therapy of individuals with mental illness

Vaidyam, Aditya Nrusimha 18 June 2019 (has links)
INTRODUCTION: Conversational agents are of great interest in the field of mental health, often appearing in the news as a solution to the problem of a limited number of clinicians per patient. Until very recently, little research was actually done in patients with mental health conditions, but rather only in healthy controls. Little is actually known about whether those with mental health conditions would want to use conversational agents, and how comfortable they might feel hearing from a chatbot results they would normally hear from a clinician. OBJECTIVES: We asked patients with mental health conditions to ask a chatbot to read a results document to them and tell us how they found the experience. To our knowledge, this is one of the earliest studies to consider actual patient perspectives on conversational agents for mental health, and it would inform whether this avenue of research is worth pursuing in the future. Our specific aims are, first and foremost, to determine the usability of such conversational agent tools; second, to determine their likely adoption among individuals with mental health disorders; and third, to determine whether those using them would develop a sense of artificial trust in the agent. METHODS: We designed and implemented a conversational agent specific to mental health tracking, along with a supporting scale able to measure its efficacy in the selected domains of Adoption, Usability, and Trust. These specific domains were selected based on the phases of interaction that patients would have during a conversation with a conversational agent, and were adapted for simplicity in measurement. Patients were briefly introduced to the technology, our particular conversational agent, and a demo, before using it themselves and taking the survey with the supporting scale thereafter. RESULTS: With a mean score of 3.27 and a standard deviation of 0.99 in the Adoption domain, subjects typically felt less than content with adoption but believed that the conversational agent could become commonplace without complicated technical hurdles. With a mean score of 3.4 and a standard deviation of 0.93 in the Usability domain, subjects tended to feel more content with the usability of the conversational agent. With a mean score of 2.65 and a standard deviation of 0.95 in the Trust domain, subjects felt least content with trusting the conversational agent. CONCLUSIONS: In summary, though conversational agents are now readily accessible and relatively easy to use, there is a bridge to be crossed before patients are willing to trust a conversational agent over speaking directly with a clinician in mental health settings. With increased attention, clinic adoption, and patient experience, however, we feel that conversational agents could be readily adopted for simple or routine tasks and for requesting information that would otherwise require time, cost, and effort to acquire. The field is still young, however, and with advances in digital technologies and artificial intelligence, capturing the essence of natural language conversation could transform this currently simple tool with limited use-cases into a powerful one for the digital clinician.
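As a purely illustrative aside, per-domain scores of the kind reported above (a mean and standard deviation for Adoption, Usability, and Trust) are typically computed by averaging each participant's Likert-scale items within a domain and then aggregating across participants. The item groupings and response data in this sketch are hypothetical, not the study's actual scale.

```python
from statistics import mean, stdev

responses = [
    # one dict per participant, items rated 1-5 (hypothetical data)
    {"adoption": [3, 4, 3], "usability": [4, 3, 4], "trust": [2, 3, 2]},
    {"adoption": [4, 3, 4], "usability": [3, 4, 3], "trust": [3, 2, 3]},
    {"adoption": [2, 3, 3], "usability": [4, 4, 3], "trust": [2, 2, 3]},
]

for domain in ("adoption", "usability", "trust"):
    # Each participant's domain score is the mean of that domain's items;
    # the reported statistics are the mean and SD of those scores.
    scores = [mean(r[domain]) for r in responses]
    print(f"{domain}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}")
```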
4

Cloning with gesture expressivity / Clonage gestuel expressif

Rajagopal, Manoj Kumar 11 May 2012 (has links)
Virtual environments allow human beings to be represented by virtual humans or avatars. Users can share a sense of virtual presence if the avatar looks like the real human it represents. This classically involves turning the avatar into a clone with the real human's appearance and voice. However, the possibility of cloning the gesture expressivity of a real person has received little attention so far. Gesture expressivity combines the style and mood of a person, and expressivity parameters have been defined in earlier works for animating embodied conversational agents. In this work, we focus on expressivity in wrist motion. First, we propose algorithms to estimate three expressivity parameters from captured 3D wrist trajectories: repetition, spatial extent and temporal extent. Then, we conducted a perceptual study, through a user survey, of the relevance of expressivity for recognizing an individual human. We animated a virtual agent using the expressivity estimated from individual humans, and asked users whether they could recognize the individual human behind each animation. We found that, when gestures are repeated in the animation, users perceive this as a discriminative feature for recognizing humans, while the absence of repetition is matched with any human, regardless of whether they repeat gestures or not. More importantly, we found that 75% or more of users could recognize the real human (out of two proposed) from an animated virtual avatar based only on the spatial and temporal extents. Consequently, gesture expressivity is a relevant cue for cloning and can be used as another element in the development of a virtual clone that represents a person.
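For readers curious how such parameters might be computed, here is a minimal sketch, under simplifying assumptions, of estimating two of them (spatial extent and temporal extent) from a captured wrist trajectory. The definitions used here, a bounding-box diagonal and the total gesture duration, are illustrative stand-ins and not the thesis's actual algorithms.

```python
import numpy as np

def spatial_extent(trajectory: np.ndarray) -> float:
    """trajectory: (N, 3) array of wrist positions in metres.
    Returns the diagonal length of the axis-aligned bounding box."""
    span = trajectory.max(axis=0) - trajectory.min(axis=0)
    return float(np.linalg.norm(span))

def temporal_extent(timestamps: np.ndarray) -> float:
    """timestamps: (N,) array of capture times in seconds.
    Returns the total gesture duration."""
    return float(timestamps[-1] - timestamps[0])

if __name__ == "__main__":
    t = np.linspace(0.0, 1.5, 90)                      # 90 frames over 1.5 s
    traj = np.stack([0.2 * np.sin(4 * np.pi * t),      # a synthetic waving motion
                     0.05 * t,
                     0.1 * np.cos(4 * np.pi * t)], axis=1)
    print(f"spatial extent: {spatial_extent(traj):.3f} m")
    print(f"temporal extent: {temporal_extent(t):.2f} s")
```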
5

California State University, San Bernardino Chatbot

Desai, Krutarth 01 December 2018 (has links)
Nowadays, chatbot development has been moving from Artificial Intelligence labs to desktop and mobile domain experts. In today's fast-growing technology world, most smartphone users spend much of their time in messaging apps such as Facebook Messenger. A chatbot is a computer program that uses messaging channels to interact with users in natural language. The chatbot uses appropriate mapping techniques to map user inputs onto a relational database, fetches the data by calling an existing API, and then sends an appropriate response to the user to drive its chats. Drawbacks include the need to learn and use chatbot-specific languages such as AIML (Artificial Intelligence Markup Language), high botmaster interference, and the use of immature technology. In this project, a Facebook Messenger-based chatbot is proposed to provide a domain-independent, easy-to-use, smart, scalable, dynamic conversational agent for getting information about CSUSB. It has unique functionality for identifying user interactions expressed in natural language and seamless support of various application domains. This provides a range of capabilities and scalability that will be evaluated in future phases of this project.
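The request flow the abstract describes (match a user's message to a known topic, look up the answer, reply) can be sketched roughly as follows. The intents, keywords, and campus data below are hypothetical placeholders, not the project's actual implementation or the Facebook Messenger API.

```python
from typing import Optional

# Hypothetical data store standing in for the relational database / API lookup.
CAMPUS_DATA = {
    "library_hours": "The library is open 8am-10pm on weekdays.",
    "admissions": "Admissions questions are handled by the Admissions Office front desk.",
}

# Hypothetical keyword-to-intent mapping standing in for the input-mapping step.
INTENT_KEYWORDS = {
    "library_hours": ["library", "hours"],
    "admissions": ["admission", "apply", "enroll"],
}

def classify_intent(text: str) -> Optional[str]:
    """Pick the first intent whose keywords appear in the message."""
    lowered = text.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return intent
    return None

def handle_message(text: str) -> str:
    """Map the message to an intent and return the reply to send back."""
    intent = classify_intent(text)
    if intent is None:
        return "Sorry, I don't have information about that yet."
    return CAMPUS_DATA[intent]  # a full system would query a database or external API here

if __name__ == "__main__":
    print(handle_message("When does the library open?"))
    print(handle_message("How do I apply for admission?"))
```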
6

Dynamic Interviewing Agents: Effects on Deception, Nonverbal Behavior, and Social Desirability

Schuetzler, Ryan M. January 2015 (has links)
Virtual humans and other virtual agents are becoming more common in our everyday lives. Whether in the form of phone-based personal assistants or automated customer service systems, these technologies have begun to touch more of our activities. This research aims to understand how this technology affects the way we interact with our computer systems. Using a chatbot, I studied how a conversational computer system affects the way people interact with and perceive automated interviewing systems in two different contexts. Study 1 examines the impact of a conversational agent on behavior during deception. It found that a conversational agent can have a powerful impact on people's perception of the system, resulting in individuals viewing the system as much more engaging and human. The conversational agent further results in a suppression of deception-related cues consistent with a more human-like interaction. Study 2 focuses on the effect of a conversational agent on socially desirable responding. Results of this study indicate that a conversational agent increases social desirability when the topic of the interview is sensitive, but has no effect when the questions are non-sensitive. The results of these two studies indicate that a conversational agent can change the way people interact with a computer system in substantial and meaningful ways. These studies represent a step toward understanding how conversational agents can shape the way we view and interact with computers.
7

Cloning with gesture expressivity

Rajagopal, Manoj Kumar 11 May 2012 (has links) (PDF)
Virtual environments allow human beings to be represented by virtual humans or avatars. Users can share a sense of virtual presence if the avatar looks like the real human it represents. This classically involves turning the avatar into a clone with the real human's appearance and voice. However, the possibility of cloning the gesture expressivity of a real person has received little attention so far. Gesture expressivity combines the style and mood of a person, and expressivity parameters have been defined in earlier works for animating embodied conversational agents. In this work, we focus on expressivity in wrist motion. First, we propose algorithms to estimate three expressivity parameters from captured 3D wrist trajectories: repetition, spatial extent and temporal extent. Then, we conducted a perceptual study, through a user survey, of the relevance of expressivity for recognizing an individual human. We animated a virtual agent using the expressivity estimated from individual humans, and asked users whether they could recognize the individual human behind each animation. We found that, when gestures are repeated in the animation, users perceive this as a discriminative feature for recognizing humans, while the absence of repetition is matched with any human, regardless of whether they repeat gestures or not. More importantly, we found that 75% or more of users could recognize the real human (out of two proposed) from an animated virtual avatar based only on the spatial and temporal extents. Consequently, gesture expressivity is a relevant cue for cloning and can be used as another element in the development of a virtual clone that represents a person.
8

Conversational agents in a family context : A qualitative study with children and parents investigating their interactions and worries regarding conversational agents

Horned, Arvid January 2020 (has links)
Conversational agents such as Siri, Google Assistant, and Alexa are growing in popularity, and the artificial intelligence, in the form of natural language processing, utilized by these agents is becoming more available and capable with time. Understanding how conversational agents are used today and what implications they have for our daily lives is important if this trend is going to continue. In this thesis I present how children interact with conversational agents today and the implications this has for families. Four families with children aged 6-9 were interviewed regarding how children interact with conversational agents today, what concerns parents have, and how they view the agent. The results show that children regard the conversational agent as a tool, and that the primary interactions are entertainment and exploration. Parents were concerned about what the agent might say when they are not there, and do not feel in control of the agent. In the beginning children have high expectations of the agent's capabilities, but they quickly assess the actual capabilities through experimentation.
9

Possibilities of Artificial Intelligence in Education : An Assessment of the role of AI chatbots as a communication medium in higher education

Slepankova, Marta January 2021 (has links)
Artificial intelligence has grown in importance in many application areas. However, its application in the education sector is in an embryonic state, where a variety of trials have been conducted. The purpose of this master's thesis is to investigate the factors that influence the acceptability of AI chatbots by university students in higher education, which might subsequently explain their low usage. The study also identifies the communication areas in higher education that students consider most appropriate for AI chatbot application. For this study, the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) has been combined with qualitative data gathered from semi-structured interviews and questionnaire surveys. The study participants are university students from various countries (Sweden, Norway, Finland, Czech Republic). The findings showed that three primary constructs significantly predict intention to use AI chatbot technology without education intentionality: Performance expectancy (PE), Effort expectancy (EE), and a newly proposed construct, Nonjudgmental expectancy (NE). Students suggested using AI chatbots for recapping course material, recommending study material, and providing exam and requirements information. Furthermore, this study provides a rationale behind AI chatbot acceptability based on students' generational characteristics. The results can guide universities in incorporating innovative solutions into their organizations.
10

Perceived benefits and limitations of chatbots in higher education / Uppfattade fördelar och nackdelar av chatbotar i högre utbildning

Lidén, Alexander, Nilros, Karl January 2020 (has links)
Prior to 2012, artificial intelligence, the study of intelligent agents, followed Moore's law, with compute doubling every two years. Since 2012, it has been doubling every 3.4 months. However, intelligent agents are usually developed with a focus on human language and health, and rarely for education. This study investigates students' perceived benefits and limitations of chatbots in higher education by exploring the relative advantage, complexity, and compatibility of different chatbot functionality. By interviewing students, the authors established four themes that were perceived to be important when using a chatbot: Decreasing obstacles, Enhanced learning process, Hesitance towards complexity, and Teacher involvement. Overall, this study suggests that it is preferable to start with little functionality and then successively improve, because smaller implementations with basic functionality are more accepted and useful to students than complex AI functionality; this should be accounted for in future implementations.
