I'm writing a program that, given an OWL ontology, retrieves all the explanations for a query, using Pellet as the reasoner. For this purpose the OWLAPI provides a class named `HSTExplanationGenerator` that implements the hitting set tree algorithm to find all the explanations. To create an instance of `HSTExplanationGenerator` I must supply a class that implements the interface `TransactionAwareSingleExpGen`; such a class provides a method to compute a single explanation.
Now, the OWLAPI provides two classes which implement this interface: `BlackBoxExplanation` and `GlassBoxExplanation`. I have read the code of both. `GlassBoxExplanation` gets the explanation from Pellet, prunes it, and then converts it into a set of `OWLAxiom`s. However, I found it hard to understand what `BlackBoxExplanation` does. My questions are: which one should I use, and what are the main differences between these two classes?
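For context, here is a minimal sketch of how the two pieces fit together, using the black-box generator. The class and constructor names are the ones I have seen in the OWLAPI/Pellet combination (`com.clarkparsia.owlapi.explanation`); the ontology IRI and class IRI are placeholders, and exact package names vary between versions, so treat this as an outline rather than copy-paste code:

```java
import java.util.Set;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

import com.clarkparsia.owlapi.explanation.BlackBoxExplanation;
import com.clarkparsia.owlapi.explanation.HSTExplanationGenerator;
import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;

public class ExplanationDemo {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // Placeholder IRI -- load your own ontology here.
        OWLOntology ontology = manager.loadOntology(
                IRI.create("http://example.org/my-ontology"));

        PelletReasonerFactory factory = PelletReasonerFactory.getInstance();
        OWLReasoner reasoner = factory.createReasoner(ontology);

        // The single-explanation generator: BlackBoxExplanation only talks to
        // the reasoner through the OWLReasoner interface.
        BlackBoxExplanation singleGen =
                new BlackBoxExplanation(ontology, factory, reasoner);

        // HSTExplanationGenerator wraps it and enumerates all explanations
        // via the hitting set tree algorithm.
        HSTExplanationGenerator hst = new HSTExplanationGenerator(singleGen);

        OWLDataFactory df = manager.getOWLDataFactory();
        OWLClass unsatClass =
                df.getOWLClass(IRI.create("http://example.org#SomeClass"));

        // All justifications for the unsatisfiability of the class,
        // capped at 10 explanations.
        Set<Set<OWLAxiom>> explanations = hst.getExplanations(unsatClass, 10);
        System.out.println(explanations);
    }
}
```

Swapping `BlackBoxExplanation` for `GlassBoxExplanation` only changes the line that constructs the single-explanation generator; the `HSTExplanationGenerator` wrapper stays the same.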
`GlassBoxExplanation` is, as far as I can tell, provided by Pellet, not the OWLAPI.
The main difference between a black box explanation and a glass box explanation is that the black box explanation cannot know the reasoner's internals: it is limited to what is available through the `OWLReasoner` interface. A glass box explanation, by contrast, can exploit the reasoner's internal data structures, which is why `GlassBoxExplanation` is tied to Pellet specifically. In this respect, the definition is no different from black box testing and white box testing in software engineering.
That said, you might want to use the owlexplanation project instead. It is based on laconic explanations, which are a more recent development in OWL entailment explanation than what is available in both OWLAPI and (old versions of) Pellet.
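If you go the owlexplanation route, the entry point is roughly as follows. This is a sketch based on my recollection of the project's API; the factory method names and generic signatures may differ in the version you use, and `reasonerFactory`, `ontology`, and `entailment` are placeholders you would supply yourself:

```java
import java.util.Set;

import org.semanticweb.owl.explanation.api.Explanation;
import org.semanticweb.owl.explanation.api.ExplanationGenerator;
import org.semanticweb.owl.explanation.api.ExplanationGeneratorFactory;
import org.semanticweb.owl.explanation.api.ExplanationManager;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;

public class OwlExplanationDemo {
    // Returns up to `limit` justifications for the given entailed axiom.
    static Set<Explanation<OWLAxiom>> explain(OWLReasonerFactory reasonerFactory,
                                              OWLOntology ontology,
                                              OWLAxiom entailment,
                                              int limit) {
        // The factory is parameterized by a reasoner factory, so any
        // OWLReasoner implementation (Pellet, HermiT, ...) can be plugged in.
        ExplanationGeneratorFactory<OWLAxiom> genFactory =
                ExplanationManager.createExplanationGeneratorFactory(reasonerFactory);
        ExplanationGenerator<OWLAxiom> gen =
                genFactory.createExplanationGenerator(ontology);
        return gen.getExplanations(entailment, limit);
    }
}
```

Note that owlexplanation works at the level of arbitrary entailed axioms rather than just unsatisfiable classes, which is one of the practical advantages over the `HSTExplanationGenerator` approach.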