
Impact of counterfactual emotions on the experience of algorithm aversion

Algorithms and their applications are increasingly entering everyday life. By analyzing historical data, algorithms can help people make more effective choices, generating predictions that are presented to the user as advice and suggestions. Given the growing prevalence of these suggestions, a better understanding of how people can improve their judgments through the suggestions presented is needed in order to improve the interface design of such applications.
Since the inception of Artificial Intelligence (AI), technical progress has aimed at surpassing human performance and abilities (Crandall et al., 2018). Less attention has been devoted to improving cooperative relationships between human agents and computer agents during decision tasks.
No study to date has investigated the negative emotions that can arise from a bad outcome after following the suggestion of an intelligent system, or how to cope with the distrust that may affect long-term use of the system.
According to Zeelenberg et al. (Martinez & Zeelenberg, 2015; Martinez, Zeelenberg, & Rijsman, 2011a; Zeelenberg & Pieters, 1999), two emotions are strongly related to wrong decisions: regret and disappointment. The objective of this research is to understand the different effects of disappointment and regret on participants' behavioral responses to failed suggestions given by algorithm-based systems.
The research investigates how people deal with a computer suggestion that leads to an unsatisfactory result, compared with a human suggestion. To this end, three different scenarios were tested in three experiments.
The first experiment compared two wrong suggestions in a between-subjects design, using a flight-ticket scenario with two tasks. This study explored models of how the source of the suggestion and trust in the system shape the experience of counterfactual emotions and the attribution of responsibility.
The second experiment used a typical purchase scenario from the psychological literature, with the aim of resolving the issues found in the first study and testing the algorithm-aversion paradigm through the lens of a classic study from the regret literature. Results showed that, contrary to initial predictions, people blame the source of a suggestion more when it comes from a human than when it comes from an intelligent computer.
The third study aimed to understand the role of counterfactuals using a paradigmatic experiment from the algorithm-aversion literature. Its main finding concerns reliance: people relied more on the algorithmic suggestion than on the human one. Nevertheless, participants felt more guilt after a wrong outcome with a computer than after a wrong outcome following a person's suggestion.
These results are relevant for better understanding how people decide and trust algorithm-based systems after a wrong outcome. This thesis is the first attempt to explain algorithm aversion in terms of the counterfactual emotions experienced and their different behavioral consequences. However, some findings were contradictory across the three experiments; this may be due to the different scenarios and to participants' thoughts and perceptions of artificial-intelligence-based systems. Three suggestions for designers of intelligent systems can be drawn from this work. The first concerns the actual involvement of counterfactuals during the user's interaction with a wrong outcome and the behavioral consequences that may affect future use of the intelligent system. The second highlights the importance of the context in which decisions are made. The third advises designers to reconsider anthropomorphism as a best practice for presenting suggestions when wrong outcomes may occur.
Future work will investigate users' perceptions in more detail and test different scenarios and decision domains.

Identifier: oai:union.ndltd.org:unitn.it/oai:iris.unitn.it:11572/252452
Date: 13 February 2020
Creators: Beretta, Andrea
Contributors: Lepri, Bruno; Beretta, Andrea; Zancanaro, Massimo
Publisher: Università degli studi di Trento, Trento
Source Sets: Università di Trento
Language: English
Detected Language: English
Type: info:eu-repo/semantics/doctoralThesis
Rights: info:eu-repo/semantics/openAccess
Relation: firstpage:1, lastpage:107, numberofpages:107, alleditors: Lepri, Bruno
