Explaining Entailments in OWL Ontologies
Building error-free, high-quality domain ontologies in OWL (Web Ontology Language), the standard ontology language endorsed by the World Wide Web Consortium, is not an easy task for domain experts, who usually have limited knowledge of OWL and logic. One sign of an erroneous ontology is the occurrence of undesired inferences (or entailments), often caused by interactions among apparently innocuous axioms within the ontology. This suggests the need for a tool that allows ontology developers to inspect why such an entailment follows from the ontology, in order to debug and repair it.
This PhD project aims to address the above problem by developing a Natural Language Generation system capable of generating accessible explanations, in English, of why an entailment follows from an OWL ontology. Justifications for entailments, that is, minimal subsets of the ontology from which the entailment can still be drawn, are adopted as the basis for generating such explanations. The focus of this thesis is on planning the content of an explanation and on how to express OWL inferences in English. Part of the novelty of this thesis is the assessment of the understandability of OWL inferences, both simple and complex, in order to enable the selection of the easiest explanation for an entailment among alternatives.
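The notion of a justification can be illustrated with a toy example. The sketch below uses plain Python and a deliberately simplified "reasoner" over SubClassOf axioms (transitive closure of class pairs); it is not how a real OWL reasoner works, and all names in it are hypothetical, but it shows what "minimal subsets of the ontology from which the entailment can be drawn" means:

```python
from itertools import combinations

def entails(axioms, goal):
    # Toy reasoner: SubClassOf axioms are (sub, super) pairs, and the
    # entailed subsumptions are just their transitive closure.
    closure = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return goal in closure

def justifications(axioms, goal):
    # Enumerate subsets by increasing size; keep each entailing subset
    # that has no already-found justification inside it, so every
    # result is minimal.
    found = []
    for size in range(1, len(axioms) + 1):
        for subset in combinations(sorted(axioms), size):
            s = set(subset)
            if entails(s, goal) and not any(f <= s for f in found):
                found.append(s)
    return found

ontology = {("Cat", "Mammal"), ("Mammal", "Animal"), ("Dog", "Mammal")}
print(justifications(ontology, ("Cat", "Animal")))
# The only justification is {("Cat", "Mammal"), ("Mammal", "Animal")};
# the Dog axiom plays no role in the entailment.
```

An entailment may have several justifications, which is precisely why selecting the easiest one to explain matters; real ontologies of course require a full OWL reasoner rather than this brute-force enumeration.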
The project findings should be of interest to researchers in Natural Language Generation and Knowledge Representation, as well as to developers of ontology viewing and editing tools and of automated reasoners for OWL who wish to integrate an explanation facility into their systems to support their users, especially non-expert users.