
Identifying & Evaluating System Components for Cognitive Trust in AI-Automated Service Encounters: Trusting a Study- & Vocational Chatbot

Eklund, Joakim, Isaksson, Fred January 2019 (has links)
The intensifying idea that AI will soon be part of our everyday lives invites speculation about the complex relationships we could one day have with non-biological social intelligence. However, establishing societal and individual acceptance of AI-powered autonomy in disciplines built on reliance on human competence raises a number of pressing challenges. One of them is: which system components will engender, and which will counteract, cognitive trust in socially oriented AI-automated processes?

This master's thesis tackles the seemingly ambiguous concept of trust in automation by identifying and evaluating system components that affect trust in a confined and contextualised setting. Practically, we design, construct and test an AI-powered chatbot, Ava, that poses socially oriented questions and gives feedback about study and vocational guidance. Through a comparative study of different system versions, drawing on both quantitative and qualitative data, we contribute to the framework for identifying and evaluating human trust in AI-automated service encounters. We show how targeted alterations to the design choices constituting the system components transparency, unbiasedness and system performance, identified as affecting trust, have consequences for the perception of the cognitive trust concepts integrity, benevolence and ability. Our results outline a procedure for practitioners looking to prioritise and develop trustworthy autonomy. More specifically, we account for how cognitive trust decreases when system opacity is increased. Moreover, we observe even more concerning effects on trust from mimicking contextual bias in the conversational agent.
