Human technology interaction: Financial decision making and delegation to algorithms

This doctoral thesis consists of three essays in the field of human-technology interaction, examined through the lens of behavioural and experimental economics. Together, the essays illuminate human-machine interaction from three different angles. The first essay addresses how an individual's relative lack of resources affects judgment and decision-making in the financial domain. It discusses how policy can leverage emerging technologies to design choice architectures that support more risk-aware decision-making among vulnerable socioeconomic groups, and how behavioural policy initiatives aimed at helping resource-deprived individuals make better financial decisions might be effectively assisted by recent developments in Artificial Intelligence (AI), along with the associated ethical considerations. The second essay focuses on individual decision-making in a risky environment with algorithmic assistance. Through an online experiment, it investigates how humans cognitively offload tasks to algorithms under different time constraints. The results demonstrate that the presence of an AI assistant benefits decision-making only when its accuracy is high. The third essay continues the investigation of human-technology interaction, focusing on how information about the outcome of an action affects incentive behaviour depending on the interacting partner. Specifically, it examines how information about the outcome of an investment affects the reward and punishment behaviour of participants interacting with human and algorithmic agents, through an experiment investigating the interplay between outcome bias and human/algorithm responsibility.

Identifier: oai:union.ndltd.org:unitn.it/oai:iris.unitn.it:11572/382269
Date: 06 July 2023
Creators: Ismagilova, Zilia
Contributors: Ismagilova, Zilia; Ploner, Matteo
Publisher: Università degli studi di Trento, place: TRENTO
Source Sets: Università di Trento
Language: English
Detected Language: English
Type: info:eu-repo/semantics/doctoralThesis
Rights: info:eu-repo/semantics/openAccess
Relation: firstpage:1, lastpage:126, numberofpages:126