Conference Papers, Year: 2023

LongEval: Longitudinal Evaluation of Model Performance at CLEF 2023

Rabab Alkhalifa
Iman Bilal
Hsuvas Borkakoty
Jose Camacho-Collados
Romain Deveaud
Alaa El-Ebshihy
Luis Espinosa-Anke
Daniel Loureiro
Harish Tayyar Madabushi
Arkaitz Zubiaga

Abstract

In this paper, we describe the plans for the first LongEval CLEF 2023 shared task, dedicated to evaluating the temporal persistence of Information Retrieval (IR) systems and text classifiers. The task is motivated by recent research showing that the performance of these models drops as the test data becomes temporally more distant from the training data. LongEval differs from traditional shared IR and classification tasks by giving special consideration to evaluating models that aim to mitigate this performance drop over time. We envisage that this task will draw the attention of the IR community and NLP researchers to the problem of the temporal persistence of models: what enables or prevents it, potential solutions, and their limitations.

Dates and versions

hal-04060056, version 1 (05-04-2023)

Identifiers

HAL Id: hal-04060056
DOI: 10.1007/978-3-031-28241-6_58

Cite

Rabab Alkhalifa, Iman Bilal, Hsuvas Borkakoty, Jose Camacho-Collados, Romain Deveaud, et al.. LongEval: Longitudinal Evaluation of Model Performance at CLEF 2023. European Conference on Information Retrieval, Apr 2023, Dublin, Ireland. pp.499-505, ⟨10.1007/978-3-031-28241-6_58⟩. ⟨hal-04060056⟩