
Learning explainable concepts in the presence of a qualitative model.

This thesis addresses the problem of learning concept descriptions that are interpretable, or explainable. Explainability is understood as the ability to justify the learned concept in terms of the existing background knowledge. The starting point for this work was an existing system that would induce only fully explainable rules. That system performed well when the model used during induction was complete and correct. In practice, however, models are likely to be imperfect, i.e., incomplete or incorrect. We report here a new approach that achieves explainability with imperfect models. The system is based on a standard inductive search driven by an accuracy-oriented heuristic, biased towards rule explainability. The bias is abandoned when there is heuristic evidence that constraining the search to explainable rules only would cause a significant loss of accuracy. Users can express their relative preference for accuracy vs. explainability. Experiments with the system indicate that, even with a partially incomplete and/or incorrect model, insisting on explainability results in only a small loss of accuracy. We also show how the new approach can repair a faulty model using evidence derived from the data during induction.
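To make the accuracy-vs-explainability trade-off described above concrete, here is a minimal, hypothetical Python sketch of a rule-scoring heuristic. All names and parameters (score_rule, lambda_pref, max_accuracy_loss) are illustrative assumptions and do not reflect the thesis's actual system or implementation.

```python
# Hypothetical sketch: score candidate rules during inductive search,
# favouring explainable rules unless the explainability bias appears to
# cost too much accuracy (as suggested by the abstract above).

def score_rule(accuracy, explainable, best_unexplainable_accuracy,
               lambda_pref=0.5, max_accuracy_loss=0.05):
    """Return a heuristic score for a candidate rule.

    accuracy: estimated accuracy of this rule on the training data.
    explainable: True if the rule can be justified by the background model.
    best_unexplainable_accuracy: best accuracy observed among unexplainable
        rules; used as heuristic evidence of the cost of the bias.
    lambda_pref: user's relative preference for explainability (assumed knob).
    max_accuracy_loss: accuracy gap beyond which the bias is abandoned.
    """
    # Keep the explainability bias only while the accuracy gap is small.
    bias_active = (best_unexplainable_accuracy - accuracy) <= max_accuracy_loss
    bonus = lambda_pref if (explainable and bias_active) else 0.0
    return accuracy + bonus


if __name__ == "__main__":
    # Explainable rule keeps its bonus while the accuracy gap is small...
    print(score_rule(0.90, True, 0.92))   # 1.40
    # ...but loses it when unexplainable rules are markedly more accurate.
    print(score_rule(0.80, True, 0.92))   # 0.80
```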

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/9762
Date: January 1995
Creators: Rouget, Thierry
Contributors: Matwin, Stan
Publisher: University of Ottawa (Canada)
Source Sets: Université d’Ottawa
Detected Language: English
Type: Thesis
Format: 131 p.