Conference paper, 2024

Exploring Precision and Recall to assess the quality and diversity of LLMs

Abstract

This paper introduces a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral, focusing on the adaptation of Precision and Recall metrics from image generation to text generation. This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora. By conducting a comprehensive evaluation of state-of-the-art language models, the study reveals significant insights into their performance on open-ended generation tasks, which are not adequately captured by traditional benchmarks. The findings highlight a trade-off between the quality and diversity of generated samples, particularly when models are fine-tuned with human feedback. This work extends the toolkit for distribution-based NLP evaluation, offering insights into the practical capabilities and challenges faced by current LLMs in generating diverse and high-quality text.
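For readers unfamiliar with distribution-based Precision and Recall, a common instantiation in the image-generation literature (Kynkäänniemi et al., 2019) estimates the support of each distribution with k-nearest-neighbor balls in an embedding space; the paper adapts this family of metrics to text. The sketch below illustrates only that generic estimator: the function names, the choice of k, the use of raw NumPy, and the assumption that both corpora have already been mapped to embeddings (e.g., by a sentence encoder) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_radii(embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Radius of each point's k-NN ball within its own set."""
    # Pairwise Euclidean distances within one embedding set.
    d = np.linalg.norm(embs[:, None, :] - embs[None, :, :], axis=-1)
    # Column 0 of each sorted row is the zero self-distance, so
    # column k is the distance to the k-th nearest neighbor.
    return np.sort(d, axis=1)[:, k]

def support_coverage(queries: np.ndarray, refs: np.ndarray,
                     ref_radii: np.ndarray) -> float:
    """Fraction of queries inside at least one reference ball."""
    d = np.linalg.norm(queries[:, None, :] - refs[None, :, :], axis=-1)
    return float(np.mean((d <= ref_radii[None, :]).any(axis=1)))

def precision_recall(real_embs, gen_embs, k=3):
    # Precision: generated samples landing in the estimated support
    # of the human-text distribution (quality).
    precision = support_coverage(gen_embs, real_embs, knn_radii(real_embs, k))
    # Recall: human samples landing in the estimated support of the
    # model's distribution (diversity / coverage).
    recall = support_coverage(real_embs, gen_embs, knn_radii(gen_embs, k))
    return precision, recall
```

Under this reading, precision near 1 means most generated samples fall within the support of the human-text distribution (quality), while recall near 1 means the model's samples cover that distribution (diversity), which is exactly the trade-off the abstract reports for models fine-tuned with human feedback.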
Main file: 2402.10693v2.pdf (3.11 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04584210, version 1 (23-05-2024)

Identifiers

HAL Id: hal-04584210

Cite

Florian Le Bronnec, Alexandre Vérine, Benjamin Negrevergne, Yann Chevaleyre, Alexandre Allauzen. Exploring Precision and Recall to assess the quality and diversity of LLMs. 62nd Annual Meeting of the Association for Computational Linguistics, Aug 2024, Bangkok, Thailand. ⟨hal-04584210⟩
284 views
141 downloads
