As the complexity of deep learning models increases, the transparency of these systems decreases. It may be hard to understand the predictions a deep learning model makes, but even harder to understand why those predictions are made. Using eXplainable AI (XAI), we can gain greater knowledge of how a model operates and how the input it receives changes its predictions. In this thesis, we apply Integrated Gradients (IG), an XAI method primarily used on image data, to datasets containing tabular and time-series data. We also evaluate how the results of IG differ across various types of models and how the choice of baseline can change the outcome. From these results, we observe that IG can be applied to both sequenced and non-sequenced data, with varying results. The gradient baseline does not affect the results of IG on models such as RNN, LSTM, and GRU, where the data contains time series, as much as it does for models like MLP with non-sequenced data. To confirm this, we also applied IG to SVM models, which showed that the choice of gradient baseline has a significant impact on the results of IG.
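For context, IG attributes a model's prediction F(x) to its input features by accumulating gradients along a straight-line path from a chosen baseline x' to the input x: IG_i(x) = (x_i - x'_i) * integral over alpha in [0, 1] of dF(x' + alpha * (x - x')) / dx_i. The sketch below is a minimal Riemann-sum approximation of that integral, assuming a PyTorch model; the function name and parameters are illustrative and not taken from the thesis.

```python
# Minimal sketch of Integrated Gradients; assumes a PyTorch model whose
# forward pass returns one scalar score per sample (for a classifier,
# index the target class logit before summing). Names are illustrative.
import torch

def integrated_gradients(model, x, baseline, steps=50):
    # Interpolation coefficients alpha in [0, 1], broadcast over x's shape.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    # Straight-line path from the baseline x' to the input x.
    path = baseline + alphas * (x - baseline)   # shape: (steps, *x.shape)
    path.requires_grad_(True)
    # One backward pass yields the gradient at every point on the path.
    grads = torch.autograd.grad(model(path).sum(), path)[0]
    # Riemann-sum approximation of the path integral, scaled by (x - x').
    return (x - baseline) * grads.mean(dim=0)

# Hypothetical usage: comparing a zero baseline with a feature-mean
# baseline on tabular data, the kind of baseline sensitivity studied here.
# attr_zero = integrated_gradients(model, x, torch.zeros_like(x))
# attr_mean = integrated_gradients(model, x, X_train.mean(dim=0))
```

The same procedure applies unchanged to sequence models such as RNN, LSTM, or GRU: the attribution is simply computed per time step and feature, since the gradient is taken with respect to the full input tensor.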
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hh-50240
Date | January 2022 |
Creators | Karlsson, Nellie, Bengtsson, My |
Publisher | Högskolan i Halmstad, Akademin för informationsteknologi |
Source Sets | DiVA Archive at Uppsala University
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |