Conference paper, 2023

Predicting Retrieval Performance Changes in Evolving Evaluation Environments

Abstract

Information retrieval (IR) system evaluation aims at comparing IR systems either (1) to one another with respect to a single test collection, or (2) across multiple collections. In the first case, the evaluation environment (test collection and evaluation metrics) stays the same, while in the second case the environment changes. Different evaluation environments may, in fact, be seen as evolutionary versions of some given evaluation environment. In this work, we propose a methodology to predict statistically significant changes in the performance of an IR system (i.e. the result delta) by quantifying the differences between test collections (i.e. the knowledge delta). In a first phase, we quantify the differences between the document collections of the test collections (i.e. the document knowledge delta) by means of TF-IDF and Language Model (LM) representations. We use these knowledge deltas to train SVM classification models to predict the statistically significant performance changes of various IR systems on evolving test collections derived from the Robust and TREC-COVID collections. We evaluate our approach against our previous experiments.
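The abstract outlines a two-stage pipeline: represent each version of a document collection (via TF-IDF or language models), quantify a knowledge delta between the versions, and train an SVM on those deltas to predict whether a system's result delta is significant. The following is a minimal Python sketch of such a pipeline using scikit-learn, not the authors' implementation: the toy documents, the placeholder labels, and the modelling choices (mean TF-IDF vectors and smoothed unigram distributions as collection representations, an RBF-kernel SVM) are illustrative assumptions; the paper's exact feature construction is in the full text.

```python
# Hypothetical sketch of the knowledge-delta / result-delta pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.svm import SVC

def tfidf_delta(vectorizer, collection_a, collection_b):
    """Knowledge delta as the difference of mean TF-IDF collection vectors."""
    vec_a = vectorizer.transform(collection_a).mean(axis=0)
    vec_b = vectorizer.transform(collection_b).mean(axis=0)
    return np.asarray(vec_a - vec_b).ravel()

def lm_divergence(vectorizer, collection_a, collection_b, eps=1e-9):
    """LM-style delta: KL divergence between the smoothed unigram
    distributions of the two collections (an assumed formulation)."""
    counts_a = np.asarray(vectorizer.transform(collection_a).sum(axis=0)).ravel() + eps
    counts_b = np.asarray(vectorizer.transform(collection_b).sum(axis=0)).ravel() + eps
    p, q = counts_a / counts_a.sum(), counts_b / counts_b.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy stand-ins for two evolving versions of a document collection.
old_docs = ["rain forecast for the coast", "virus outbreak reported today"]
new_docs = ["severe storm forecast issued", "vaccine trial results reported"]

# Fit shared vocabularies so every delta vector has the same dimensionality.
tfidf = TfidfVectorizer().fit(old_docs + new_docs)
counter = CountVectorizer().fit(old_docs + new_docs)

# Hypothetical training set: one delta per collection pair, labelled 1 when
# the IR system's effectiveness changed significantly between the versions.
X = [tfidf_delta(tfidf, old_docs, new_docs),
     tfidf_delta(tfidf, new_docs, old_docs)]
y = [1, 0]  # placeholder labels for the sketch

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([tfidf_delta(tfidf, old_docs, old_docs)]))  # predicted result delta
print(lm_divergence(counter, old_docs, new_docs))             # LM-based delta score
```

In practice the labels would come from a paired significance test over a system's effectiveness scores on the two collection versions, so the classifier learns to map collection-level change to expected evaluation-level change.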
File not deposited

Dates and versions

hal-04288398, version 1 (16-11-2023)

License

Attribution (CC BY)

Identifiers

Cite

Alaa El-Ebshihy, Tobias Fink, Gabriela Gonzalez-Saez, Petra Galuščáková, Florina Piroi, et al. Predicting Retrieval Performance Changes in Evolving Evaluation Environments. 14th International Conference of the CLEF Association, Sep 2023, Thessaloniki, Greece. pp. 21-33, ⟨10.1007/978-3-031-42448-9_3⟩. ⟨hal-04288398⟩