Explainable Antibiotics Prescriptions in NLP with Transformer Models. Contreras Zaragoza, Omar Emilio. January 2021.
The overprescription of antibiotics has resulted in antibiotic-resistant bacteria, which is considered a global health threat. Deciding whether antibiotics should be prescribed, based on individual visits in patients' Swedish-language medical records, can be framed as a text classification task, one of the applications of Natural Language Processing (NLP). However, medical experts and patients cannot trust a model if no explanations for its decisions are provided. In this work, multilingual and monolingual Transformer models are evaluated on this medical classification task. Furthermore, local explanations are obtained with SHapley Additive exPlanations (SHAP) and Integrated Gradients to compare the models' predictions and to evaluate the explainability methods. Finally, the local explanations are aggregated into global explanations in order to understand the features that contributed most to the prediction of each class.
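The Shapley values behind SHAP can be illustrated with a small exact computation. The sketch below uses a hypothetical value function (a lookup table of scores standing in for a classifier's class logit on subsets of present tokens; the token names and scores are illustrative, not from the thesis) and enumerates all coalitions, which is feasible only for a handful of features:

```python
from itertools import combinations
from math import factorial

# Hypothetical value function: model score when only the given tokens are
# "present". Stands in for a transformer classifier's logit; purely illustrative.
SCORES = {
    frozenset(): 0.0,
    frozenset({"fever"}): 0.2,
    frozenset({"cough"}): 0.1,
    frozenset({"pneumonia"}): 0.6,
    frozenset({"fever", "cough"}): 0.4,
    frozenset({"fever", "pneumonia"}): 0.8,
    frozenset({"cough", "pneumonia"}): 0.7,
    frozenset({"fever", "cough", "pneumonia"}): 1.0,
}

def shapley(tokens):
    """Exact Shapley values by enumerating every coalition of the other tokens."""
    n = len(tokens)
    phi = {}
    for t in tokens:
        others = [u for u in tokens if u != t]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (SCORES[s | {t}] - SCORES[s])
        phi[t] = total
    return phi

phi = shapley(["fever", "cough", "pneumonia"])
print(phi)
# Efficiency property: attributions sum to f(all tokens) - f(empty set)
print(sum(phi.values()))
```

SHAP approximates these values for real models, where exhaustive enumeration over all 2^n coalitions is intractable; the efficiency property printed at the end is what makes per-class attributions comparable and aggregatable.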
Explainable AI For Predictive Maintenance. Karlsson, Nellie; Bengtsson, My. January 2022.
As the complexity of deep learning models increases, the transparency of these systems decreases. It may be hard to understand the predictions a deep learning model makes, and even harder to understand why those predictions are made. Using eXplainable AI (XAI), we can gain greater insight into how the model operates and how the input it receives can change its predictions. In this thesis, we apply Integrated Gradients (IG), an XAI method primarily used on image data, to datasets containing tabular and time-series data. We also evaluate how the results of IG differ across various types of models and how the choice of baseline can change the outcome. In these results, we observe that IG can be applied to both sequenced and non-sequenced data, with varying results. The choice of gradient baseline does not affect the results of IG as much for models such as RNN, LSTM, and GRU, where the data contains time series, as it does for models like MLP with non-sequenced data. To confirm this, we also applied IG to SVM models, which showed that the choice of gradient baseline has a significant impact on the results of IG.
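The baseline dependence discussed above is visible even in a minimal sketch. Below, IG is approximated with a Riemann sum for a toy differentiable model (f(x) = sum of w·x², with an analytic gradient; the model and weights are illustrative, not the thesis's models). Changing the baseline changes the attributions because IG explains the difference f(x) - f(baseline):

```python
import numpy as np

# Toy differentiable model: f(x) = sum(w * x**2), so grad f = 2 * w * x.
w = np.array([1.0, 2.0, 0.5])

def grad_f(x):
    return 2.0 * w * x

def integrated_gradients(x, baseline, steps=200):
    # Midpoint Riemann-sum approximation of
    # (x - baseline) * integral_0^1 grad f(baseline + a*(x - baseline)) da
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, -1.0, 2.0])
ig_zero = integrated_gradients(x, np.zeros_like(x))  # zero baseline
ig_ones = integrated_gradients(x, np.ones_like(x))   # all-ones baseline
print(ig_zero)
print(ig_ones)
```

For this model the sum of attributions equals f(x) - f(baseline) (IG's completeness axiom), so the two baselines necessarily distribute credit differently; this is the effect the thesis measures across model families.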
|