
Exploring Proposals to Align Users' Mental Models and Improve Interactions with Voice Assistants (VAs)

Motta, Isabela Canellas da 28 March 2023 (has links)
Voice Assistants (VAs) bring various benefits to users and are increasingly popular, but some barriers to VA adoption and usage still prevail, such as users' attitudes, privacy concerns, and negative perceptions of these systems. An approach to mitigating such obstacles and improving voice interactions may be to understand users' mental models of VAs, since studies indicate that users' understanding of VAs is unaligned with these systems' actual capabilities. Thus, considering the importance of a correct mental model for interactions, exploring the influential factors causing misperceptions, and solutions for dealing with this issue, may be paramount. The objective of this research was to identify leading causes of users' misperceptions and to offer design recommendations for aligning users' mental models of VAs with these systems' real capabilities. To achieve this goal, we conducted a systematic literature review (SLR), exploratory interviews with experts, and a questionnaire-based three-round Delphi study. The results indicate that design aspects such as VAs' high humanness and the lack of output transparency are influential for mental models. Although these drivers were indicated as leading to users' misperceptions, removing VAs' humanness or excessively displaying information about VAs might not be an immediate solution. Instead, developers should assess the context and task domains in which the VA will be used to guide design decisions. Moreover, developers should understand users' profiles and backgrounds to adjust interactions, as users' characteristics influence how they perceive the product. Finally, development teams should have a correct and homogeneous understanding of VAs and possess the necessary knowledge to employ solutions properly. This latter requirement is challenging, since the novelty of VAs might demand that professionals master new skills and tools.

Virtual Coaches: Background, Theories, and Future Research Directions

Weimann, Thure Georg, Schlieter, Hannes, Brendel, Alfred Benedikt 19 April 2024 (has links)
Digitalization crosses all areas of life (Hess et al. 2014). Recent progress in artificial intelligence (AI) opens new potential for further developments and improvements, with virtual coaching being a prime example. Virtual coaches (VCs) aim to optimize the user's life by transforming cognition, affect, and behavior towards a stated goal. Since they emerged from the health and sports domains, typical examples are VCs in the form of digital avatars that instruct physical exercises, shape health-related knowledge, and provide motivational support to achieve the user's goals (e.g., weight loss) (Ding et al. 2010; Tropea et al. 2019). Nonetheless, the application areas of VCs are versatile, and exploring potential areas (e.g., healthcare, work, finance, leisure, and the environment) constitutes an essential topic for future research and development. According to Gartner's hype cycle for human capital management technology, VCs are still in their infancy but are considered innovation triggers for the coming years (Gartner, Inc. 2021). Specifically, VCs can replace or complement traditional human-to-human coaching scenarios and promise broad access to personalized coaching services independent of place and time (Graßmann and Schermuly 2021). As a result, VCs may contribute to solving challenges posed by an aging society and a skilled labor shortage (European Commission 2016; Edwards and Cheok 2018). Last but not least, the recent COVID-19 pandemic additionally showcased the need for VCs as an alternative to traditional face-to-face interventions. Against this background, and driven by the potential and promises of VCs, research has recently engaged in developing and understanding VC applications (Tropea et al. 2019; Lete et al. 2020; Graßmann and Schermuly 2021). To introduce the concept into information systems (IS) research and provide a basis for researchers and practitioners alike, this catchword article aims to provide a holistic view of VCs.
The structure of this paper is as follows. Section 2 develops a definition, delimits VCs from related system classes, and proposes a research framework. Section 3 aggregates existing research into the framework and concludes with an outlook on future IS research perspectives.

Let’s Quiz?! – Assessing the Learner’s Preferences with a Pedagogical Conversational Agent

Khosrawi-Rad, Bijan, Grogorick, Linda, Keller, Paul Felix, Schlimbach, Ricarda, Di Maria, Marco, Robra-Bissantz, Susanne 18 October 2024 (has links)
Digital Education: Gamification F.1 / From Section 1, Introduction: Pedagogical conversational agents (PCAs) are intelligent dialog systems that interact with their users in natural language (Hobert & Meyer von Wolf, 2019). Intelligent dialog systems like ChatGPT are already being used to support users in daily concerns and learning (Kasneci et al., 2023). PCAs can support learners by giving them tips for their study process to improve their self-regulated learning, often relying on artificial intelligence (AI) (Khosrawi-Rad, Schlimbach, Strohmann & Robra-Bissantz, 2022). Self-regulated learning gives students a sense of autonomy, fostering their motivation (Young, 2005). Motivation is crucial for learning, as its absence is often a reason why students struggle in their studies and even drop out (Rinn et al., 2022). To learn in a self-regulated way, learners need to be aware of their preferences and learning characteristics and apply the appropriate learning techniques and strategies (C. Chen, 2002).

Design of a Pedagogical Conversational Agent as Moderator of an Educational Serious Game for Business Model Development

Khosrawi-Rad, Bijan, Hoang, Gia-Huy, Schlimbach, Ricarda, Robra-Bissantz, Susanne 18 October 2024 (has links)
Digital Education: AI (2) G.1 / Pedagogical conversational agents (PCAs) are intelligent dialog systems that communicate with their users in natural language, either text-based (as chatbots) or speech-based (as voice assistants) (Hobert & Meyer von Wolf, 2019). Due to technical advances, they are getting better at processing natural language. ChatGPT, for example, shows that chatbots are already widely used to support everyday life and digital learning. However, users often perceive PCAs as not motivating (Nißen et al., 2021; Wellnhammer, Dolata, Steigler & Schwabe, 2020). According to a recent practice analysis by Janssen, Grützner & Breitner (2021), this is a common reason why learners do not use them, and PCAs fail. Combining PCAs with game-based approaches is one way to counteract this issue (Benner, Schöbel, Süess, Baechle & Janson, 2022; Schöbel, Schmidt-Kraepelin, Janson & Sunyaev, 2021).

Timing multimodal turn-taking in human-robot cooperative activity

Chao, Crystal 27 May 2016 (has links)
Turn-taking is a fundamental process that governs social interaction. When humans interact, they naturally take initiative and relinquish control to each other using verbal and nonverbal behavior in a coordinated manner. In contrast, existing approaches for controlling a robot's social behavior do not explicitly model turn-taking, resulting in interaction breakdowns that confuse or frustrate the human and detract from the dyad's cooperative goals. They also lack generality, relying on scripted behavior control that must be designed for each new domain. This thesis seeks to enable robots to cooperate fluently with humans by automatically controlling the timing of multimodal turn-taking. Based on our empirical studies of interaction phenomena, we develop a computational turn-taking model that accounts for multimodal information flow and resource usage in interaction. This model is implemented within a novel behavior generation architecture called CADENCE, the Control Architecture for the Dynamics of Embodied Natural Coordination and Engagement, that controls a robot's speech, gesture, gaze, and manipulation. CADENCE controls turn-taking using a timed Petri net (TPN) representation that integrates resource exchange, interruptible modality execution, and modeling of the human user. We demonstrate progressive developments of CADENCE through multiple domains of autonomous interaction encompassing situated dialogue and collaborative manipulation. We also iteratively evaluate improvements in the system using quantitative metrics of task success, fluency, and balance of control.
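The timed Petri net (TPN) idea behind CADENCE can be illustrated with a minimal discrete-event sketch. This is a hypothetical toy model, not the CADENCE implementation: the place and transition names are invented, and conflict resolution is simplified to greedy, insertion-order scheduling.

```python
import heapq

# Toy timed Petri net for turn-taking (illustrative names only).
# Places hold tokens; a transition consumes its input tokens when it
# becomes enabled and produces its output tokens after a fixed delay.
class TimedPetriNet:
    def __init__(self):
        self.marking = {}      # place name -> token count
        self.transitions = {}  # name -> (inputs, outputs, delay)
        self.clock = 0.0
        self._queue = []       # scheduled firings: (fire_time, seq, name)
        self._seq = 0          # tie-breaker for simultaneous firings

    def add_place(self, place, tokens=0):
        self.marking[place] = tokens

    def add_transition(self, name, inputs, outputs, delay):
        self.transitions[name] = (list(inputs), list(outputs), delay)

    def _schedule_enabled(self):
        for name, (inputs, _, delay) in self.transitions.items():
            if all(self.marking.get(p, 0) > 0 for p in inputs):
                for p in inputs:  # consume tokens at the start of firing
                    self.marking[p] -= 1
                heapq.heappush(self._queue,
                               (self.clock + delay, self._seq, name))
                self._seq += 1

    def run(self, horizon):
        """Simulate until no transition completes before the horizon."""
        fired = []
        self._schedule_enabled()
        while self._queue:
            t, _, name = heapq.heappop(self._queue)
            if t > horizon:
                break
            self.clock = t
            for p in self.transitions[name][1]:  # produce output tokens
                self.marking[p] = self.marking.get(p, 0) + 1
            fired.append((t, name))
            self._schedule_enabled()
        return fired

# Two-party floor exchange: a single "floor" token alternates between
# robot and human, with speaking times of 2.0 s and 3.0 s respectively.
net = TimedPetriNet()
net.add_place("robot_has_floor", 1)
net.add_place("human_has_floor", 0)
net.add_transition("robot_speaks", ["robot_has_floor"],
                   ["human_has_floor"], 2.0)
net.add_transition("human_speaks", ["human_has_floor"],
                   ["robot_has_floor"], 3.0)
turns = net.run(horizon=10.0)
# turns -> [(2.0, 'robot_speaks'), (5.0, 'human_speaks'),
#           (7.0, 'robot_speaks'), (10.0, 'human_speaks')]
```

The single floor token is what makes the model a turn-taking model: neither party can fire its speaking transition while the other holds the token. The thesis's TPN is far richer, adding resource exchange across modalities (speech, gaze, gesture), interruptible execution, and a model of the human partner.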

Enhancing Support for Eating Disorders: Developing a Conversational Agent Integrating Biomedical Insights and Cognitive Behavioral Therapy

Rehn Hamrin, Josefin January 2024 (has links)
This thesis investigates the application of TrueBalance, a conversational agent designed to support young adults vulnerable to eating disorders (EDs). TrueBalance integrates Cognitive Behavioral Therapy (CBT) techniques with biomedical insights, including genetic and neurobiological factors, to provide a more personalized and scientifically grounded support system. It addresses limitations in existing dietary monitoring tools, which usually focus on calorie tracking and food intake while neglecting the nuanced needs of specific groups such as young females and elite athletes, who are particularly vulnerable to EDs and disordered eating behaviors. The study addresses how biomedical determinants can be integrated into a conversational agent, how such agents can use CBT principles to support individuals vulnerable to EDs, and what challenges and opportunities arise from the user's perspective when using such a dialogue model. The research strives to bridge the gap in current dietary self-monitoring tools by offering a more robust and empathetic support system for individuals struggling with EDs. Through iterative development and user testing, TrueBalance has demonstrated its potential as an engaging educational tool. Feedback from both therapists and users has highlighted the tool's utility in real-world settings and has led to suggestions for more personalized interactions and a more adaptive response system. The findings suggest that conversational agents like TrueBalance have potential in non-clinical support environments for individuals with EDs and can function as an informative, supportive tool for therapists' education.

Trustworthy conversational agents for children

Escobar Planas, Marina 19 January 2025 (has links)
This thesis explores the development of trustworthy conversational agents (CAs) that may interact with children, in alignment with the European Commission's human-centred artificial intelligence (AI) initiative. CAs are becoming increasingly integral to children's lives, used in sectors such as education, entertainment, and healthcare. However, children, as a unique user group, present particular needs and challenges, necessitating rigorous guidelines and evaluations to ensure that these systems are trustworthy. The research begins by examining the fundamentals of CAs, focusing on their interaction with children. A literature review highlights both the benefits and the risks associated with these technologies, emphasising the need for multidisciplinary approaches to mitigate potential ethical concerns. The concept of trustworthiness is introduced, along with a significant research gap in the ethical development of CAs. To address this gap, the thesis integrates knowledge from multiple domains, combining theoretical exploration with empirical studies. This approach leads to a set of guidelines addressing the particularities of child-CA interaction, highlighting the importance of transparency, age-appropriate behaviour, AI awareness, stakeholder involvement, and risk management. These guidelines were then applied in the design and development of a collaborative storytelling CA. The system's trustworthiness was assessed using the ALTAI framework, demonstrating an improvement in trustworthiness. This evaluation is complemented by an experimental study examining how children perceive and interact with these systems, with a specific focus on risks such as misinterpreting the CA's non-human nature and children's data-sharing behaviour. Key findings underscore the positive impact of transparency and the need for a more effective approach to clarifying the CA's agency. The contributions of this thesis provide both theoretical insights and practical guidance for developing trustworthy CAs for children, ensuring they are ethically aligned with societal values and children's rights. Future research is encouraged to expand these findings to broader cultural contexts and to develop more nuanced evaluation tools for assessing the trustworthiness of CAs for children. / Escobar Planas, M. (2024). Trustworthy conversational agents for children [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/214061
