
Explainable ML for drug prediction

Cancer may be treated with personalized medicine, meaning that specific patients might respond better to specific treatments instead of receiving a common treatment. The Reference Drug-based Neural Network (RefDNN) predicts whether a particular cancer cell line will resist a given drug, but it does not provide an explanation for this prediction. The objective of this thesis is to investigate explainable machine learning methods for extracting rule-based explanations from RefDNN predictions, and to assess how confident we can be in these explanations and whether they make sense from a biological point of view. One such explainable machine learning method is Local Rule-based Explanation (LORE), which extracts rule-based explanations from any black-box model using local decision trees. In this thesis, LORE is applied to explain the predictions of RefDNN on a drug sensitivity dataset, and three experiments are set up. The first experiment tests the accuracy and general performance of the extracted rule-based explanations. The second experiment tests the robustness of the rule-based explanations. The third experiment checks the global fidelity of the local decision trees used by LORE to mimic the behaviour of RefDNN. Finally, one rule-based decision is explained from a biological point of view, and conclusions are drawn from the obtained results.
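To illustrate the kind of local rule extraction the abstract describes, the sketch below fits a shallow decision tree to a perturbation neighbourhood around a single instance and reads off the root-to-leaf path as an IF-THEN rule. This is only a simplified illustration under stated assumptions: the function and feature names are hypothetical, the black-box predictor stands in for a model such as RefDNN, and the published LORE method generates its neighbourhood with a genetic algorithm rather than the plain Gaussian perturbation used here for brevity.

```python
# Minimal sketch of LORE-style local rule extraction (hypothetical names).
# NOTE: real LORE builds the neighbourhood with a genetic algorithm; a simple
# Gaussian perturbation is used here only to keep the example short.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_local_rule(black_box_predict, x, feature_names,
                       n_samples=1000, noise_scale=0.1, random_state=0):
    """Fit a local surrogate decision tree around instance x and return its rule."""
    rng = np.random.default_rng(random_state)
    # 1. Build a local neighbourhood by perturbing the instance of interest.
    Z = x + rng.normal(scale=noise_scale, size=(n_samples, x.shape[0]))
    # 2. Label the neighbourhood with the black-box model (e.g. a drug-response predictor).
    y = black_box_predict(Z)
    # 3. Fit an interpretable surrogate: a shallow decision tree.
    tree = DecisionTreeClassifier(max_depth=4, random_state=random_state).fit(Z, y)
    # 4. Extract the root-to-leaf path followed by x as an IF-THEN rule.
    node_ids = tree.decision_path(x.reshape(1, -1)).indices
    feature, threshold = tree.tree_.feature, tree.tree_.threshold
    premises = []
    for node in node_ids[:-1]:  # the last node is the leaf, so it has no split
        op = "<=" if x[feature[node]] <= threshold[node] else ">"
        premises.append(f"{feature_names[feature[node]]} {op} {threshold[node]:.3f}")
    outcome = tree.predict(x.reshape(1, -1))[0]
    return premises, outcome, tree
```

The returned premises form the rule antecedent (e.g. a list of feature thresholds) and the surrogate's prediction forms its consequent; comparing that consequent with the black-box output over the neighbourhood gives the kind of fidelity measure the thesis evaluates.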

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-205262
Date: January 2024
Creators: Diaz-Roncero Gonzalez, Daniel
Publisher: Linköpings universitet, Institutionen för datavetenskap, Linköpings universitet, Filosofiska fakulteten
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
