The widespread and growing use of machine learning models, especially in highly critical areas such as law, underscores the need for interpretable models. Models that cannot be audited are vulnerable to inheriting biases from the dataset, and even locally interpretable models are vulnerable to adversarial attack. To address this issue, a new methodology is proposed to translate any existing machine learning model into a globally interpretable one. This methodology, MTRE-PAN, is designed as a hybrid SVM-decision tree model and leverages the interpretability of linear hyperplanes. MTRE-PAN uses this hybrid model to create polygons that act as intermediates for the decision boundary. MTRE-PAN is compared to a previously proposed model, TRE-PAN, on three non-synthetic datasets: Abalone, Census, and Diabetes. TRE-PAN translates a machine learning model into a 2-3 decision tree in order to provide global interpretability for the target model. Each dataset is used to train a neural network that represents the non-interpretable target model. For all target models, the results show that MTRE-PAN generates interpretable decision trees that have fewer leaves and higher parity compared to TRE-PAN.
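The abstract describes MTRE-PAN only at a high level: a tree whose splits are linear SVM hyperplanes, fit so that its leaf regions (convex polygons) approximate the decision boundary of a non-interpretable target model. The sketch below is an illustrative approximation of that idea, not the thesis's actual algorithm; the dataset, the recursion depth, the stopping rules, and all function names (`build_tree`, `tree_predict`, `HyperplaneNode`) are assumptions made for the example.

```python
# Illustrative sketch (not the thesis implementation): approximate a black-box
# model with a tree whose internal splits are linear SVM hyperplanes, so each
# leaf region is an intersection of half-spaces, i.e. a convex polygon.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

class HyperplaneNode:
    def __init__(self, svm=None, left=None, right=None, label=None):
        self.svm, self.left, self.right, self.label = svm, left, right, label

def build_tree(X, target_model, depth=0, max_depth=5, min_samples=20):
    """Recursively split X with hyperplanes fit to the target model's labels."""
    y = target_model.predict(X)                     # labels come from the black box
    if depth == max_depth or len(X) < min_samples or len(np.unique(y)) == 1:
        return HyperplaneNode(label=np.bincount(y).argmax())   # leaf: majority label
    svm = LinearSVC(max_iter=5000).fit(X, y)        # one linear hyperplane per node
    side = svm.decision_function(X) >= 0
    if side.all() or (~side).all():                 # hyperplane failed to split the node
        return HyperplaneNode(label=np.bincount(y).argmax())
    return HyperplaneNode(
        svm=svm,
        left=build_tree(X[~side], target_model, depth + 1, max_depth, min_samples),
        right=build_tree(X[side], target_model, depth + 1, max_depth, min_samples),
    )

def tree_predict(node, x):
    """Follow hyperplane splits down to a leaf label for a single sample x."""
    while node.label is None:
        node = node.right if node.svm.decision_function([x])[0] >= 0 else node.left
    return node.label

# A small neural network on a toy dataset stands in for the non-interpretable target model.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
tree = build_tree(X, target)
fidelity = np.mean(np.array([tree_predict(tree, x) for x in X]) == target.predict(X))
print(f"agreement with target model on training data: {fidelity:.3f}")
```

In this reading, each root-to-leaf path defines a polygon bounded by the hyperplanes along the path, which is what makes the surrogate globally interpretable; how MTRE-PAN refines these polygons and how parity is measured are detailed in the thesis itself.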
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/17126720 |
Date | 07 January 2022 |
Creators | Mohammad Naser Al-Merri (11794466) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/thesis/GLOBAL_TRANSLATION_OF_MACHINE_LEARNING_MODELS_TO_INTERPRETABLE_MODELS/17126720 |