
Exploring cognitive biases in voice-based virtual assistants

This paper investigates the conversational capabilities of voice-controlled virtual assistants when presented with biased questions and answers. Three commercial virtual assistants (Google Assistant, Alexa, and Siri) are tested for the presence of three cognitive biases (wording, framing, and confirmation bias) in their answers. The results show that all three assistants are susceptible to wording and framing biases to varying degrees and have limited ability to recognise questions designed to induce cognitive biases. The paper also describes the different response strategies available to voice user interfaces, the differences between them, and the role of strategy in handling biased content.

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:mdh-64152
Date January 2023
Creators Khofman, Anna
Publisher Mälardalens universitet, Akademin för innovation, design och teknik
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
