This paper investigates the conversational capabilities of voice-controlled virtual assistants with respect to biased questions and answers. Three commercial virtual assistants (Google Assistant, Alexa and Siri) are tested for the presence of three cognitive biases (wording, framing and confirmation) in the answers given. The results show that all assistants are susceptible to wording and framing biases to varying degrees, and have limited ability to recognise questions designed to induce cognitive biases. The paper describes the different response strategies available to voice user interfaces, the differences between them, and discusses the role of strategy in relation to biased content.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:mdh-64152 |
Date | January 2023 |
Creators | Khofman, Anna |
Publisher | Mälardalens universitet, Akademin för innovation, design och teknik |
Source Sets | DiVA Archive at Upsalla University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |