Predicting Retrieval Performance Changes in Evolving Evaluation Environments
Abstract
Information retrieval (IR) system evaluation aims at comparing IR systems either (1) to one another with respect to a single test collection, or (2) across multiple test collections. In the first case, the evaluation environment (test collection and evaluation metrics) stays the same, while in the second case the environment changes. Different evaluation environments may, in fact, be seen as evolutionary versions of a given evaluation environment. In this work, we propose a methodology to predict a statistically significant change in the performance of an IR system (i.e., the result delta) by quantifying the differences between test collections (i.e., the knowledge delta). In a first phase, we quantify the differences between the document collections of the test collections by means of TF-IDF and Language Model (LM) representations. We use these quantified differences to train SVM classification models that predict the significant performance changes of various IR systems on evolving test collections derived from the Robust and TREC-COVID collections. We evaluate our approach against our previous experiments.
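To make the described pipeline concrete, the following is a minimal sketch, not the authors' implementation: it represents two document collections with TF-IDF, summarizes their difference as a small feature vector standing in for the knowledge delta, and trains an SVM classifier to predict whether a system's performance change is statistically significant. The function collection_delta, the toy collections, and the labels are hypothetical.

```python
# Minimal sketch (assumed names and toy data, not the authors' code):
# TF-IDF difference between two document collections -> SVM prediction
# of significant vs. non-significant performance change.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

def collection_delta(docs_old, docs_new):
    """Quantify the TF-IDF difference between two document collections
    as a small feature vector (illustrative stand-in for the knowledge delta)."""
    vectorizer = TfidfVectorizer()
    vectorizer.fit(docs_old + docs_new)  # shared vocabulary for both collections
    centroid_old = np.asarray(vectorizer.transform(docs_old).mean(axis=0)).ravel()
    centroid_new = np.asarray(vectorizer.transform(docs_new).mean(axis=0)).ravel()
    diff = centroid_new - centroid_old
    # Summary statistics keep features comparable across collection pairs.
    return np.array([np.linalg.norm(diff), diff.mean(), diff.std()])

# Hypothetical evolving collection pairs; label = 1 if the IR system's
# performance change between the two collections was statistically significant.
pairs = [
    (["old document about topic a", "old document about topic b"],
     ["new document about topic a", "new document about topic c"]),
    (["another old document"],
     ["another new document", "one extra document"]),
]
labels = [1, 0]

X = np.vstack([collection_delta(old, new) for old, new in pairs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))
```

In the methodology summarized above, LM representations are used alongside TF-IDF, and the labels would come from statistical significance tests on system runs over the evolving collections derived from Robust and TREC-COVID.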