In recent years, Artificial Intelligence (AI) has evolved into a powerful tool applied across disciplines to solve sophisticated problems. As AI becomes more powerful and ubiquitous, its methods often also become more opaque, which can undermine users' trust in AI systems and fail to meet legal requirements for AI transparency. This report investigates, through a quantitative survey, whether a credit-card fraud detection support system can be made explainable to its users. A publicly available credit card dataset was used. Two Machine Learning (ML) methods, Deep Learning and Random Forest, were implemented and applied to the credit card fraud dataset, and their performance was evaluated in terms of accuracy, recall, sufficiency, and F1 score. Two explainable AI (XAI) methods, SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations), were then applied to the results of these two ML methods, and the XAI results were evaluated through a quantitative survey. The survey results showed that the XAI explanations can slightly increase users' impression of the system's ability to reason, and that LIME had a slight advantage over SHAP in terms of explainability. Further investigation of visualizing the data pre-processing and the training process is suggested to offer deeper explanations for users.
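The following is a minimal sketch, not the thesis's actual implementation, of the kind of pipeline the abstract describes: a Random Forest classifier trained on a credit card fraud dataset, evaluated with accuracy, recall, and F1 score, then explained with SHAP and LIME. The file name "creditcard.csv", the "Class" label column, the train/test split, and the model hyperparameters are assumptions here (they follow the common public Kaggle credit-card fraud dataset), not details taken from the thesis.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, f1_score
import shap
from lime.lime_tabular import LimeTabularExplainer

# Assumed dataset layout: features in all columns except "Class",
# where Class = 1 marks a fraudulent transaction.
df = pd.read_csv("creditcard.csv")
X, y = df.drop(columns=["Class"]), df["Class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Random Forest as one of the two ML methods mentioned in the abstract.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("recall:  ", recall_score(y_test, pred))
print("F1 score:", f1_score(y_test, pred))

# SHAP: global feature attributions for the tree ensemble.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
# Depending on the shap version this is a list (one array per class)
# or a 3D array; take the fraud-class attributions either way.
fraud_sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
mean_abs = np.abs(fraud_sv).mean(axis=0)
top = sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])[:10]
print("SHAP global importance (fraud class):", top)

# LIME: local explanation for a single test transaction.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=X.columns.tolist(),
    class_names=["legitimate", "fraud"], mode="classification")
explanation = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=10)
print(explanation.as_list())
```

LIME explains one prediction at a time, while SHAP attributions can be aggregated into a global view as above, which is the kind of contrast the survey in the report compares.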
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:his-20848
Date | January 2021
Creators | Ji, Yingchao
Publisher | Högskolan i Skövde, Institutionen för informationsteknologi
Source Sets | DiVA Archive at Uppsala University
Language | English
Detected Language | English
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format | application/pdf
Rights | info:eu-repo/semantics/openAccess