Explainable AI (XAI) has grown into an important research field in recent years. As increasingly complex AI systems are deployed in decision-making settings, the need to explain their outputs grows accordingly, both to ensure transparency and to maintain stakeholder trust. This study focuses on a specific type of explanation method, namely counterfactual explanations, which describe the changes that would have to be made to an input in order for the model to produce a different outcome. The work extends a previous dissertation in which a proof-of-concept tool was created for comparing several counterfactual explanation methods. This thesis investigates the desirable properties of counterfactual explanation methods together with appropriate metrics for measuring them, and the identified metrics are then used to evaluate and compare the counterfactual approaches with respect to those properties. The proof-of-concept tool is extended with a properties-to-metrics mapping module, and a user-preference-based system is developed that allows users to evaluate the different counterfactual approaches according to their own priorities. This addition to the proof-of-concept tool is a critical step towards providing researchers in the field with a standardised benchmarking tool.
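To make the two central ideas concrete, the following is a minimal Python sketch, purely illustrative and not the thesis's actual tool: a counterfactual as a small input change that flips a model's decision, two example quality metrics (proximity and sparsity), and a preference-weighted ranking of methods. All feature names, scores, and weights here are hypothetical assumptions.

```python
import numpy as np

# Toy setting: a loan applicant the model rejects, and a counterfactual input
# the model would accept -- i.e. "what to change to reach a different outcome".
original       = np.array([30.0, 2.0])   # [income in kSEK, open credit lines]
counterfactual = np.array([45.0, 2.0])   # raising income flips the decision

# Two commonly cited counterfactual quality metrics (simplified here):
proximity = float(np.linalg.norm(counterfactual - original))  # distance to original; lower is better
sparsity = float(np.mean(counterfactual != original))         # fraction of features changed; lower is better
print(f"proximity={proximity:.1f}, sparsity={sparsity:.2f}")

# Preference-based comparison: the user weights the properties they care
# about, and each method is ranked by the weighted sum of its metric scores.
method_scores = {                         # hypothetical scores in [0, 1], higher is better
    "method_A": {"proximity": 0.8, "sparsity": 0.6},
    "method_B": {"proximity": 0.9, "sparsity": 0.4},
}
user_weights = {"proximity": 0.7, "sparsity": 0.3}

ranking = sorted(
    ((name, sum(user_weights[m] * s for m, s in scores.items()))
     for name, scores in method_scores.items()),
    key=lambda kv: kv[1],
    reverse=True,
)
print(ranking)  # [('method_B', 0.75), ('method_A', 0.74)]
```

Changing the weights changes the ranking, which is the point of the preference-based system: no single counterfactual method is best for every user.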
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-514290 |
Date | January 2023 |
Creators | Akram, Muhammad Zain |
Publisher | Uppsala universitet, Datalogi |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |
Relation | UPTEC IT, 1401-5749 ; 23037 |