
Explanations In Contextual Graphs: A Solution To Accountability In Knowledge Based Systems

For intelligent systems to be viable and widely used tools, users must be able to understand how the system reaches its decisions. Without understanding how the system arrived at an answer, a user is less likely to trust its decision. One way to increase a user's understanding of how the system functions is to employ explanations that account for the output produced. There have been attempts to explain intelligent systems over the past three decades; however, each attempt has had shortcomings, chief among them a separation between the logic used to produce the output and the logic used to produce the explanation. It is proposed that explanations built on the representational paradigm of Contextual Graphs can overcome these shortcomings. Two temporal forms of explanation are proposed: a pre-explanation and a post-explanation. The pre-explanation is intended to help the user understand the decision-making process before a decision is made; the post-explanation is intended to help the user understand how the system arrived at its final decision. Both are intended to give the user a greater understanding of the logic used to compute the system's output, and thereby enhance the system's credibility and utility. A prototype system was constructed as a decision-support tool for a National Science Foundation (NSF) research program, implementing knowledge the researcher collected during the past year at the NSF.
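
To make the two explanation forms concrete, the following is a minimal sketch in Python of how a contextual graph might support them: a pre-explanation that enumerates every practice (path) the graph can follow before a decision is made, and a post-explanation that traces the path actually taken for a given context. The node classes, the enumeration and traversal logic, and the NSF proposal-review fragment are illustrative assumptions, not the thesis's actual implementation.

    # Illustrative sketch only; names and the example graph are assumptions,
    # not taken from the thesis.

    class Action:
        """A leaf step the system performs."""
        def __init__(self, name):
            self.name = name

    class ContextualNode:
        """Branches on the value of one contextual element."""
        def __init__(self, element, branches):
            self.element = element    # e.g. "proposal_type"
            self.branches = branches  # value -> list of Action/ContextualNode

    def enumerate_paths(node):
        """Pre-explanation: every practice (path) the graph can follow."""
        if isinstance(node, Action):
            return [[node.name]]
        paths = []
        for value, steps in node.branches.items():
            suffixes = [[]]
            for step in steps:
                suffixes = [p + q for p in suffixes
                            for q in enumerate_paths(step)]
            for p in suffixes:
                paths.append([f"{node.element}={value}"] + p)
        return paths

    def traverse(node, context, trace):
        """Post-explanation: record the path actually taken."""
        if isinstance(node, Action):
            trace.append(f"do: {node.name}")
            return
        value = context[node.element]
        trace.append(f"{node.element} is {value}")
        for step in node.branches[value]:
            traverse(step, context, trace)

    # Hypothetical fragment of an NSF proposal-review graph.
    graph = ContextualNode("proposal_type", {
        "new": [Action("assign three reviewers"), Action("schedule panel")],
        "renewal": [Action("retrieve prior reviews"), Action("schedule panel")],
    })

    print("Pre-explanation (possible practices):")
    for path in enumerate_paths(graph):
        print("  " + " -> ".join(path))

    trace = []
    traverse(graph, {"proposal_type": "renewal"}, trace)
    print("Post-explanation (path followed):")
    for step in trace:
        print("  " + step)

Because both explanations are read off the same graph that computes the decision, the logic producing the output and the logic producing the explanation are one and the same, which is the shortcoming of earlier systems that the abstract identifies.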

Identifier: oai:union.ndltd.org:ucf.edu/oai:stars.library.ucf.edu:etd-1501
Date: 01 January 2005
Creators: Sherwell, Brian W
Publisher: STARS
Source Sets: University of Central Florida
Language: English
Detected Language: English
Type: text
Format: application/pdf
Source: Electronic Theses and Dissertations
