Conference paper, Year: 2024

An Evaluation Framework for Attributed Information Retrieval using Large Language Models

Abstract

With the growing success of Large Language Models (LLMs) in information-seeking scenarios, search engines are now adopting generative approaches to provide answers along with in-line citations as attribution. While existing work focuses mainly on attributed question answering, in this paper we target information-seeking scenarios, which are often more challenging due to the open-ended nature of the queries and the size of the label space, i.e., the diversity of candidate attributed answers per query. We propose a reproducible framework to evaluate and benchmark attributed information seeking, using any backbone LLM and three architectural designs: (1) Generate, (2) Retrieve then Generate, and (3) Generate then Retrieve. Experiments on HAGRID, an attributed information-seeking dataset, show the impact of the different scenarios on both the correctness and attributability of answers.
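To make the three designs concrete, below is a minimal Python sketch of how each scenario composes generation and retrieval. The generate and retrieve functions are hypothetical placeholders standing in for any backbone LLM and any passage retriever; this illustrates the scenarios named in the abstract, not the framework's actual API.

    # Sketch of the three architectural designs. All names here (generate,
    # retrieve, the prompt wording) are illustrative placeholders, not the
    # released framework's interface.

    from typing import List, Tuple

    def generate(prompt: str) -> str:
        """Placeholder for a call to any backbone LLM."""
        raise NotImplementedError

    def retrieve(query: str, k: int = 3) -> List[str]:
        """Placeholder for a call to any passage retriever."""
        raise NotImplementedError

    def scenario_generate(query: str) -> Tuple[str, List[str]]:
        # (1) Generate: the LLM answers alone, with no retrieved evidence.
        answer = generate(f"Answer the question: {query}")
        return answer, []

    def scenario_retrieve_then_generate(query: str) -> Tuple[str, List[str]]:
        # (2) Retrieve then Generate: passages are retrieved first, then the
        # LLM grounds its answer on them and cites them in-line.
        passages = retrieve(query)
        context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
        answer = generate(
            f"Using the passages below, answer the question and cite "
            f"passages as [i].\n{context}\nQuestion: {query}"
        )
        return answer, passages

    def scenario_generate_then_retrieve(query: str) -> Tuple[str, List[str]]:
        # (3) Generate then Retrieve: the LLM answers first, and passages are
        # retrieved afterwards to attribute the generated statements.
        answer = generate(f"Answer the question: {query}")
        passages = retrieve(answer)
        return answer, passages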
Main file
CiKM_resource_paper_updated.pdf (729.29 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04720446, version 1 (03-10-2024)

Identifiers

Cite

Hanane Djeddal, Pierre Erbacher, Raouf Toukal, Laure Soulier, Karen Pinel-Sauvagnat, et al. An Evaluation Framework for Attributed Information Retrieval using Large Language Models. CIKM '24 - 33rd ACM International Conference on Information and Knowledge Management, ACM, Oct 2024, Boise, Idaho, United States. ⟨10.1145/3627673.3679172⟩. ⟨hal-04720446⟩
37 Views
1 Download

