Journal article in Semantic Web – Interoperability, Usability, Applicability, 2024

Towards counterfactual explanations for ontologies

Abstract

Debugging and repairing Web Ontology Language (OWL) ontologies has been a key field of research since OWL became a W3C recommendation. One way to understand errors and fix them is through explanations. These explanations are usually extracted from the reasoner and displayed to the ontology authors as is. Meanwhile, there has been a recent call in the eXplainable AI (XAI) field to use expert knowledge in the form of knowledge graphs and ontologies. In this paper, a parallel is drawn between explanations for machine learning and explanations for ontologies. This link enables the adaptation of XAI methods to explain ontologies and their entailments. Counterfactual explanations have been identified as good candidates to solve the explainability problem in machine learning. The CEO (Counterfactual Explanations for Ontologies) method is therefore proposed to explain inconsistent ontologies using counterfactual explanations. A preliminary user study is conducted to confirm that applying XAI methods to ontologies is relevant and worth pursuing.
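
For readers unfamiliar with the idea, the sketch below illustrates what a counterfactual explanation of an inconsistent ontology can look like. It is not the paper's CEO method, only a naive single-assertion retraction search; it assumes the owlready2 Python library (plus a Java runtime for its bundled HermiT reasoner), and all names (the demo.owl IRI, the classes, the individual tweety) are invented for the example.

```python
# Illustrative sketch only, NOT the CEO algorithm from the paper.
# Assumes: pip install owlready2, and a Java runtime for HermiT.
from owlready2 import (get_ontology, Thing, AllDisjoint,
                       sync_reasoner, OwlReadyInconsistentOntologyError)

onto = get_ontology("http://example.org/demo.owl")  # hypothetical IRI

with onto:
    class Flying(Thing): pass
    class NonFlying(Thing): pass
    AllDisjoint([Flying, NonFlying])   # Flying and NonFlying cannot overlap
    tweety = Flying("tweety")          # tweety is asserted to fly...
    tweety.is_a.append(NonFlying)      # ...and also not to fly: inconsistent

def is_consistent():
    """Run the reasoner and report whether the ontology is consistent."""
    try:
        with onto:
            sync_reasoner(debug=0)
        return True
    except OwlReadyInconsistentOntologyError:
        return False

# Counterfactual-style explanation: find a single class assertion whose
# retraction would restore consistency ("if tweety were not X, the
# ontology would be consistent").
for cls in list(tweety.is_a):
    tweety.is_a.remove(cls)
    if is_consistent():
        print(f"If tweety were not a {cls.name}, "
              "the ontology would be consistent.")
    tweety.is_a.append(cls)  # restore the assertion before the next trial
```

Under these assumptions, the loop prints one counterfactual per retractable assertion (here, Flying and NonFlying), which is the general shape of explanation the abstract describes.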
Main file: swj3566.pdf (469.99 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04813004, version 1 (01-12-2024)

Identifiers

  • HAL Id: hal-04813004, version 1

Cite

Matthieu Bellucci, Nicolas Delestre, Nicolas Malandain, Cecilia Zanni-Merk. Towards counterfactual explanations for ontologies. Semantic Web – Interoperability, Usability, Applicability, 2024, Special Issue on Interactive Semantic Web, 15 (5). ⟨hal-04813004⟩