
Human Interpretable Rule Generation from Convolutional Neural Networks Using RICE (Rotation Invariant Contour Extraction)

Advances in artificial intelligence have been rapid in recent years and have revolutionized various industries. For example, convolutional neural networks (CNNs) perform image classification at a level equivalent to that of humans on many image datasets. These state-of-the-art networks achieved unprecedented success using complex architectures with billions of parameters, numerous kernel configurations, and weight initialization and regularization methods. This turned the models into black-box entities with little to no insight into their decision-making process. The lack of transparency in decision making raised concerns in sectors of the user community such as healthcare, finance, and justice. This challenge motivated our research, in which we produced human interpretable influential features from a CNN for image classification and captured the interactions between these features by inducing a concise decision tree that makes accurate classification decisions. The proposed methodology used a pre-trained VGG16 with fine-tuning to extract the feature maps produced by learnt filters. A decision tree was then induced on these extracted features, capturing important interactions between them. On the CelebA image dataset, we produced human interpretable rules capturing the main facial landmarks responsible for separating males from females, with the decision tree achieving 89.57% accuracy; on the Cats vs Dogs dataset, 87.55% accuracy was achieved.
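The abstract describes the pipeline only at a high level; the sketch below, assuming a PyTorch/torchvision and scikit-learn setup, illustrates one way feature maps from a pre-trained VGG16 can be pooled into per-filter features and a decision tree induced on them. The layer choice, pooling step, and the `train_images`/`train_labels` names are illustrative assumptions, not the thesis code.

```python
# Minimal sketch (not the author's exact pipeline): extract VGG16 feature maps
# and induce a decision tree on the pooled filter activations.
import numpy as np
import torch
import torchvision.models as models
from sklearn.tree import DecisionTreeClassifier, export_text

# Pre-trained VGG16; the convolutional stack yields the learnt filters' feature maps.
vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg16.eval()
feature_extractor = vgg16.features  # conv layers only; (N, 512, 7, 7) for 224x224 inputs

def extract_features(images: torch.Tensor) -> np.ndarray:
    """Global-average-pool each of the 512 feature maps into one scalar per filter."""
    with torch.no_grad():
        maps = feature_extractor(images)   # (N, 512, 7, 7)
        pooled = maps.mean(dim=(2, 3))     # (N, 512): one activation per learnt filter
    return pooled.numpy()

# Hypothetical data: `train_images` as (N, 3, 224, 224) normalized tensors, `train_labels` as (N,)
# X_train = extract_features(train_images)
# tree = DecisionTreeClassifier(max_depth=5)  # a shallow tree keeps the extracted rules concise
# tree.fit(X_train, train_labels)
# print(export_text(tree))                    # human-readable rules over filter activations
```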

Identifier: oai:union.ndltd.org:unt.edu/info:ark/67531/metadc2356140
Date: 07 1900
Creators: Sharma, Ashwini Kumar
Contributors: Fu, Song, Pears, Russel, Ji, Yuede
Publisher: University of North Texas
Source Sets: University of North Texas
Language: English
Detected Language: English
Type: Thesis or Dissertation
Format: Text
Rights: Public, Sharma, Ashwini Kumar, Copyright, Copyright is held by the author, unless otherwise noted. All rights reserved.
