Building an error-free and high-quality ontology in OWL (Web Ontology Language)---the latest standard ontology language endorsed by the World Wide Web Consortium---is not an easy task for domain experts, who usually have limited knowledge of OWL and logic. One sign of an erroneous ontology is the occurrence of undesired inferences (or entailments), often caused by interactions among (apparently innocuous) axioms within the ontology. This suggests the need for a tool that allows developers to inspect why such an entailment follows from the ontology in order to debug and repair it. This thesis aims to address the above problem by advancing knowledge and techniques in generating explanations for entailments in OWL ontologies. We build on earlier work on identifying minimal subsets of the ontology from which an entailment can be drawn---known technically as justifications. Our main focus is on planning (at a logical level) an explanation that links a justification (premises) to its entailment (conclusion); we also consider how best to express the explanation in English. Among other innovations, we propose a method for assessing the understandability of explanations, so that the easiest can be selected from a set of alternatives. Our findings make a theoretical contribution to Natural Language Generation and Knowledge Representation. They could also play a practical role in improving the explanation facilities in ontology development tools, considering especially the requirements of users who are not expert in OWL.
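To make the notion of a justification concrete, here is a minimal sketch (not the thesis's own implementation) of finding minimal axiom subsets that entail a conclusion. The toy "ontology" is just a set of named subclass axioms with hypothetical names, entailment is transitive closure of the subclass relation, and justifications are found by brute-force subset search:

```python
from itertools import combinations

# Hypothetical toy ontology: named subclass axioms (sub, super).
# Names and axioms are illustrative only, not from the thesis.
axioms = {
    "ax1": ("Dog", "Mammal"),
    "ax2": ("Mammal", "Animal"),
    "ax3": ("Dog", "Pet"),
}

def entails(axiom_names, goal):
    """True if `goal` = (sub, super) follows from the chosen axioms
    under transitive closure of the subclass relation."""
    closure = {axioms[name] for name in axiom_names}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return goal in closure

def justifications(goal):
    """All minimal axiom subsets entailing `goal` (brute force:
    grow subset size, keep a subset only if no found justification
    is contained in it)."""
    names = sorted(axioms)
    found = []
    for size in range(1, len(names) + 1):
        for subset in combinations(names, size):
            if entails(subset, goal) and not any(
                set(j) <= set(subset) for j in found
            ):
                found.append(subset)
    return found

print(justifications(("Dog", "Animal")))  # → [('ax1', 'ax2')]
```

Here the entailment `Dog ⊑ Animal` has exactly one justification, `{ax1, ax2}`: the two axioms whose interaction produces the (possibly unintended) inference. Real justification-finding tools work over full OWL semantics with a reasoner rather than this toy closure, but the minimality idea is the same.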
Source Sets: EThOS UK
Type: Electronic Thesis or Dissertation