An Application of LatentCF++ on Providing Counterfactual Explanations for Fraud Detection. Giannopoulou, Maria-Sofia. January 2023.
The aim of explainable machine learning is to help humans understand how complex machine learning models actually work. Machine learning models have achieved strong performance in many areas, but the mechanisms behind how a model works and how its decisions are made often remain opaque. This limitation increases users' hesitation to trust a model's results, and even to improve its performance further. Counterfactual explanation is one method of offering explainability in machine learning: it indicates what would have happened if the input to a model had been modified in a specific way.

Fraud is the act of acquiring something from someone else in a dishonest manner. Companies' and organizations' vulnerability to such malicious actions has been increasing with the spread of digitalization. Machine learning applications have been successfully deployed to combat fraudulent actions, but the severity of their impact highlights the need for further scientific exploration of the topic. The current research attempts to do so by studying counterfactual explanations for fraud detection.

Latent-CF is a method for counterfactual generation that uses an autoencoder and performs gradient descent in its latent space. LatentCF++ is an extension of Latent-CF that combines a classifier with an autoencoder: the encoded latent representation is perturbed through gradient-descent optimization until an input initially assigned the undesired class is classified with the desired prediction. Compared to Latent-CF, LatentCF++ uses Adam optimization and adds further constraints to ensure that the generated counterfactual's class probability surpasses the set decision boundary (a minimal sketch of this optimization loop is given below).

The research question this thesis addresses is: "To what extent can LatentCF++ provide reliable counterfactual explanations in fraud detection?". To answer it, the study conducts an experiment that implements a new application of LatentCF++, using a one-dimensional convolutional neural network as the classifier and a deep autoencoder for counterfactual generation on fraud data. The study reports satisfactory results for counterfactual explanation production with LatentCF++ on fraud detection: classification is quite accurate, and the reconstruction loss of the deep autoencoder is very low. The validity of the generated counterfactual examples is lower than in the original study, and so is their proximity. Compared to the baseline models, k-nearest neighbors outperforms LatentCF++ in terms of validity, and Feature Gradient Descent outperforms it in terms of proximity.
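The optimization loop described above can be illustrated with a minimal sketch. This is not the thesis's actual implementation: it assumes PyTorch, a pre-trained classifier `clf` that returns a fraud probability for a single example, a pre-trained autoencoder split into `encoder` and `decoder` modules, and illustrative hyperparameters (learning rate, iteration budget, and a 0.5 decision boundary).

```python
# Hedged sketch of the LatentCF++ counterfactual search:
# perturb the latent code of one input with Adam until the decoded
# sample is assigned the desired class with probability above the
# decision boundary. All names and hyperparameters are assumptions.
import torch

def latentcf_counterfactual(x, clf, encoder, decoder,
                            target_class=0, threshold=0.5,
                            lr=1e-3, max_iters=500):
    # Encode the input and make the latent code the variable being optimized.
    z = encoder(x).detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(max_iters):
        optimizer.zero_grad()
        x_cf = decoder(z)          # candidate counterfactual in input space
        p_fraud = clf(x_cf)        # assumed to return P(fraud) for one example
        # Probability of the desired class (here: non-fraud by default).
        p_target = 1.0 - p_fraud if target_class == 0 else p_fraud
        # Extra validity constraint of LatentCF++: stop only once the
        # counterfactual's class probability surpasses the decision boundary.
        if p_target.item() > threshold:
            break
        # Push the classifier's output toward the desired class.
        loss = (p_target - 1.0) ** 2
        loss.backward()
        optimizer.step()

    return decoder(z).detach()
```

In this sketch the proximity of the counterfactual to the original input is not optimized explicitly; it is encouraged implicitly by searching in the autoencoder's latent space, which is one way to read the role of the autoencoder described above.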