Conference paper, 2024

eval-rationales: An End-to-End Toolkit to Explain and Evaluate Transformers-Based Models

Abstract

State-of-the-art (SOTA) transformer-based models in Natural Language Processing (NLP) and Information Retrieval (IR) are often opaque in their decision-making processes. This limitation has given rise to various techniques for enhancing model interpretability and to evaluation benchmarks aimed at designing more transparent models. The techniques focus on building interpretable models that shed light on the rationales behind their predictions, while the evaluation benchmarks assess the quality of the rationales these models provide. Although numerous resources exist for using these techniques and benchmarks independently, their seamless integration remains a non-trivial task. In response to this challenge, this work introduces an end-to-end toolkit that integrates the most common interpretability techniques and evaluation approaches. Our toolkit offers user-friendly resources that facilitate fast and robust evaluations.
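As a conceptual illustration only (the toolkit's actual API is not shown on this page, and every name below is hypothetical), the sketch that follows shows the kind of step such a toolkit automates: binarizing token-level attribution scores from any interpretability technique into a predicted rationale and scoring it against a human-annotated rationale with token-level F1, a standard plausibility metric popularized by benchmarks such as ERASER.

```python
# Hypothetical sketch: scoring an extracted rationale against a human annotation.
# None of these names come from the eval-rationales toolkit; they only illustrate
# the evaluation idea described in the abstract (rationale plausibility via
# token-level F1).

from typing import List, Sequence


def top_k_rationale(attributions: Sequence[float], k: int) -> List[int]:
    """Binarize attribution scores by keeping the k highest-scoring tokens."""
    ranked = sorted(range(len(attributions)),
                    key=lambda i: attributions[i], reverse=True)
    selected = set(ranked[:k])
    return [1 if i in selected else 0 for i in range(len(attributions))]


def token_f1(predicted: Sequence[int], gold: Sequence[int]) -> float:
    """Token-level F1 between a predicted rationale mask and a gold mask."""
    assert len(predicted) == len(gold)
    tp = sum(1 for p, g in zip(predicted, gold) if p and g)
    fp = sum(1 for p, g in zip(predicted, gold) if p and not g)
    fn = sum(1 for p, g in zip(predicted, gold) if g and not p)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    # Toy attribution scores produced by any interpretability technique
    # (attention, gradients, LIME, ...) for the tokens of one input.
    attributions = [0.05, 0.40, 0.10, 0.35, 0.02, 0.08]
    gold_mask = [0, 1, 0, 1, 0, 0]           # human-annotated rationale tokens
    predicted_mask = top_k_rationale(attributions, k=2)
    print("predicted:", predicted_mask)       # -> [0, 1, 0, 1, 0, 0]
    print("token F1 :", token_f1(predicted_mask, gold_mask))  # -> 1.0
```

Faithfulness-oriented metrics (e.g., comprehensiveness and sufficiency) would additionally re-run the model on inputs with rationale tokens removed or kept; that part is omitted from this sketch.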

Main file
978-3-031-56069-9_20.pdf (800.8 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04722291, version 1 (04-10-2024)

Identifiers

Cite

Khalil Maachou, Jesús Lovón-Melgarejo, Jose G Moreno, Lynda Tamine. eval-rationales: An End-to-End Toolkit to Explain and Evaluate Transformers-Based Models. European Conference on Information Retrieval, Mar 2024, Glasgow, United Kingdom. pp.212-217, ⟨10.1007/978-3-031-56069-9_20⟩. ⟨hal-04722291⟩