teex: A toolbox for the evaluation of explanations
Abstract
We present teex, a Python toolbox for the evaluation of explanations. teex focuses on the evaluation of local explanations of the predictions of machine learning models by comparing them to ground-truth explanations. It supports several types of explanations: feature importance vectors, saliency maps, decision rules, and word importance maps. A collection of evaluation metrics is provided for each type. Real-world datasets and generators of synthetic data with ground-truth explanations are also contained within the library. teex contributes to research on explainable AI by providing tested, streamlined, user-friendly tools to compute quality metrics for the evaluation of explanation methods. Source code and a basic overview can be found at github.com/chus-chus/teex, and tutorials and full API documentation are at teex.readthedocs.io.
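As an illustration of the kind of ground-truth comparison the abstract describes, the following is a minimal sketch using NumPy and scikit-learn. It does not use teex's own API; the function `importance_scores`, its parameters, and the example vectors are hypothetical and stand in for the tested scorers the library provides for each explanation type.

```python
# Minimal sketch (NOT teex's actual API) of scoring a predicted
# feature-importance vector against a ground-truth one.
import numpy as np
from sklearn.metrics import f1_score

def importance_scores(gt, pred, threshold=0.5):
    """Compare a predicted importance vector to a ground-truth one.

    Returns the cosine similarity of the raw vectors and the F1 score of
    the vectors binarized at `threshold` (values assumed to lie in [0, 1]).
    """
    gt = np.asarray(gt, dtype=float)
    pred = np.asarray(pred, dtype=float)
    cos = float(gt @ pred / (np.linalg.norm(gt) * np.linalg.norm(pred)))
    f1 = f1_score(gt >= threshold, pred >= threshold)
    return {"cosine": cos, "f1": f1}

# Example: ground truth marks features 0 and 2 as relevant;
# the explainer's output largely agrees.
gt_importance = [1.0, 0.0, 1.0, 0.0]
pred_importance = [0.9, 0.1, 0.6, 0.4]
print(importance_scores(gt_importance, pred_importance))
```

teex provides analogous, tested metrics for each supported explanation type (feature importance vectors, saliency maps, decision rules, and word importance maps), together with real-world datasets and synthetic-data generators that supply the ground-truth explanations.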
Domains
Artificial Intelligence [cs.AI]

Origin
Publication funded by an institution