Explainable AI (XAI) is a research field dedicated to finding ways of opening up the black-box nature of many of today's machine learning models. As society finds new ways of applying these models in everyday life, certain risk thresholds are crossed when human decision making is replaced with autonomous systems. How can we trust the algorithms to make sound judgements when all we provide is an input and all they provide is an output? XAI methods examine different data points in the machine learning process to determine which factors influenced the decision making. While these methods of post-hoc explanation may provide certain insights, previous studies into XAI have found that the designs are often biased towards the designers and fail to incorporate the interdisciplinary perspectives needed to improve user understanding. In this thesis, we look at animal classification and which features in animal images humans consider important. We use a novel approach of first letting the participants create their own post-hoc explanations, before asking them to evaluate real XAI explanations as well as a pre-made human explanation generated from a test group. The results show strong cohesion in the participants' answers and can provide guidelines for designing XAI explanations more closely aligned with human reasoning. The data also indicates a preference for human-like explanations within the context of this study. Additionally, a potential bias was identified, as participants preferred explanations marking large portions of an image as important, even when many of the marked areas coincided with what the participants themselves considered to be unimportant. While the sample pool and data gathering tools are limited, the results point toward a need for further research into comparisons of human reasoning and XAI explanations, and into how such comparisons may affect the evaluation of, and bias towards, explanation methods.
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:mau-52454 |
Date | January 2022 |
Creators | Helgstrand, Carl Johan, Hultin, Niklas |
Publisher | Malmö universitet, Institutionen för datavetenskap och medieteknik (DVMT) |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |