Limits of XAI application-grounded evaluation: an e-sport prediction example
Abstract
Explainable AI (XAI) emerged to address the lack of transparency in machine learning models. Its methods are proliferating, as are ways of evaluating them, including human performance-based evaluations of explanations. Such evaluations quantify the contribution of XAI algorithms to human decision-making. This work measures accuracy and response time to evaluate SHAP explanations on an e-sport prediction task. The results of this pilot experiment contradict our intuitions about the beneficial potential of these explanations and allow us to discuss the difficulties of this evaluation methodology.
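For readers unfamiliar with SHAP, the sketch below shows what an explanation pipeline of the kind evaluated here might look like. The dataset, feature names, and model choice are illustrative assumptions, not the authors' actual experimental setup.

```python
# Minimal sketch of producing SHAP explanations for a match-outcome
# classifier. NOTE: the features, data, and model below are hypothetical
# stand-ins, not the paper's setup.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pre-match statistics for two competing teams.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "team_a_winrate": rng.uniform(0, 1, 500),
    "team_b_winrate": rng.uniform(0, 1, 500),
    "rating_gap": rng.normal(0, 1, 500),
})
y = (X["team_a_winrate"] - X["team_b_winrate"]
     + 0.1 * X["rating_gap"] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values for tree ensembles: each value is a
# feature's additive contribution to one prediction relative to the base
# rate. These per-feature attributions are what participants would see
# alongside the model's prediction in a human evaluation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
```

In an application-grounded evaluation such as the one described above, participants would be shown the model's predicted winner together with these per-feature attributions, and their decision accuracy and response time would be compared against a no-explanation baseline.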