Conference paper, 2024

LongEval: Longitudinal Evaluation of Model Performance at CLEF 2024

Rabab Alkhalifa
Hsuvas Borkakoty
Romain Deveaud
Alaa El-Ebshihy
Luis Espinosa-Anke
Tobias Fink
Petra Galuščáková
David Iommi
Harish Tayyar Madabushi
Pablo Medina-Alias
Philippe Mulhem
Arkaitz Zubiaga

Abstract

This paper introduces the second LongEval Lab, planned as part of the CLEF 2024 conference. The lab's two tasks provide researchers with test data for addressing the challenge of temporal persistence of effectiveness in both information retrieval and text classification, motivated by the observation that model performance degrades as the test data becomes temporally distant from the training data. LongEval distinguishes itself from traditional IR and classification tasks by emphasizing the evaluation of models designed to mitigate performance drops over time using evolving data. The second LongEval edition will further engage the IR community and NLP researchers in addressing the crucial challenge of temporal persistence in models, exploring the factors that enable or hinder it, and identifying potential solutions along with their limitations.
Main file

LongEval_ECIR_2024-2.pdf (200.68 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04577466, version 1 (08-11-2024)

License

Copyright (All rights reserved)

Identifiers

HAL Id: hal-04577466
DOI: 10.1007/978-3-031-56072-9_8

Cite

Rabab Alkhalifa, Hsuvas Borkakoty, Romain Deveaud, Alaa El-Ebshihy, Luis Espinosa-Anke, et al. LongEval: Longitudinal Evaluation of Model Performance at CLEF 2024. Advances in Information Retrieval (ECIR 2024), Mar 2024, Glasgow (Scotland), United Kingdom. pp. 60-66, ⟨10.1007/978-3-031-56072-9_8⟩. ⟨hal-04577466⟩