Ensuring "public trust" in AI seems to be a priority for policymakers and the private sector. It is expected that without public trust, such innovations cannot be implemented with legitimacy, and there is a risk of potential public backlash or resistance (for example cases of Cambridge Analytica, predictive policing, or Clearview AI). There is a rich body of research relating to public trust in data use that suggests that "building public trust" can too often place the burden on the public to be "more trusting" and will do little to address other concerns, including whether trust is a desirable and attainable characteristic of human-AI relation. I argue that there is good reason for the public not to trust AI, especially in the absence of regulatory structures that afford genuine accountability, but at the same time AI can be considered reliable. To that end, the main argument of this paper is 1. We are asked to trust an entity that cannot enter the trust relationship, because it doesn’t fulfil the conditions spelled out by the definitions of trust. 2. We are presented with a misdescription of the agent. Who we trust in fact are developers or policy makers. I also argue that the term "reliance" should be used instead of "trust", as by definition it is more fitting current AI applications. Additionally, the focus should be on framing trust as part of practices expected from AI solution providers, developers and regulators.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-187120 |
Date | January 2022 |
Creators | Janus, Dominika |
Publisher | Linköpings universitet, Institutionen för kultur och samhälle |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |