
Human Understandable Interpretation of Deep Neural Networks Decisions Using Generative Models

Deep Neural Networks have long been considered black-box systems, and their lack of interpretability is a concern when they are applied in safety-critical systems. In this work, a novel approach to interpreting the decisions of DNNs is proposed. The approach relies on exploiting generative models and the interpretability of their latent space. Three methods for ranking features are explored: two depend on sensitivity analysis, and the third on a Random Forest model. The Random Forest model was the most successful at ranking the features, given its accuracy and inherent interpretability.
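As a rough illustration of the two method families described in the abstract, the sketch below is hypothetical and not the thesis implementation: it assumes latent codes Z produced by some generative model's encoder and binary decisions y from the DNN under study, then ranks latent dimensions first by a Random Forest's impurity-based feature importances and then by a simple perturbation-based sensitivity score. All names (Z, y, forest) and the toy data are assumptions made for illustration only.

# Minimal sketch, assuming latent codes Z and DNN decisions y are available.
# This is NOT the thesis code; it only illustrates the two ranking ideas.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 16))                 # hypothetical latent codes: 1000 samples, 16 dims
y = (Z[:, 3] + 0.5 * Z[:, 7] > 0).astype(int)   # toy "DNN decisions" driven by dims 3 and 7

# Random Forest ranking: fit a forest to mimic the DNN's decisions in latent
# space, then read off impurity-based importances per latent dimension.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(Z, y)
rf_ranking = np.argsort(forest.feature_importances_)[::-1]
print("Dims ranked by forest importance:", rf_ranking[:5])

# Sensitivity-style ranking (also a sketch): perturb each latent dimension
# and count how often the surrogate's prediction flips. Here the forest
# stands in for the classifier being probed.
base = forest.predict(Z)
sensitivity = []
for j in range(Z.shape[1]):
    Zp = Z.copy()
    Zp[:, j] += 1.0                              # unit perturbation of dimension j
    sensitivity.append(np.mean(forest.predict(Zp) != base))
print("Dims ranked by sensitivity:", np.argsort(sensitivity)[::-1][:5])

On the toy data, both rankings should surface dimensions 3 and 7 first; the abstract's finding is that the Random Forest variant was the most successful of the three, owing to its accuracy and inherent interpretability.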

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hh-41035
Date: January 2019
Creators: Alabdallah, Abdallah
Publisher: Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS)
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
