Impact of Injecting Ground Truth Explanations on Relational Graph Convolutional Networks and their Explanation Methods for Link Prediction on Knowledge Graphs
Abstract
Relational Graph Convolutional Networks (RGCNs) are commonly applied to Knowledge Graphs (KGs) for black-box link prediction. Several algorithms, or explanation methods, have been proposed to explain the predictions of this model. Recently, researchers have constructed datasets with ground truth explanations for the quantitative and qualitative evaluation of predicted explanations. Benchmark results showed that state-of-the-art explanation methods had difficulty predicting explanations. In this work, we leverage prior knowledge to further constrain the loss function of RGCNs by penalizing node embeddings that are far from the node embeddings in their associated ground truth explanation. Empirical results show improved explanation prediction performance of state-of-the-art post hoc explanation methods for RGCNs, at the cost of predictive performance. Additionally, we quantify the different types of errors made, both in terms of data and semantics.
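One plausible form of the constrained objective described above, sketched under assumptions not stated in the abstract (a squared Euclidean distance penalty, a scalar weight $\lambda$, and $\mathcal{E}_{\text{gt}}$ denoting the set of node pairs occurring in a prediction's ground truth explanation):

$$\mathcal{L} \;=\; \mathcal{L}_{\text{link}} \;+\; \lambda \sum_{(u,v)\,\in\,\mathcal{E}_{\text{gt}}} \lVert \mathbf{h}_u - \mathbf{h}_v \rVert_2^2,$$

where $\mathcal{L}_{\text{link}}$ is the standard RGCN link-prediction loss and $\mathbf{h}_u$, $\mathbf{h}_v$ are the learned node embeddings. The second term pulls embeddings of nodes that co-occur in a ground truth explanation closer together, with $\lambda$ trading off predictive performance against explanation quality.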