What are the best systems? New perspectives on NLP Benchmarking
Abstract
In machine learning, a benchmark refers to a collection of datasets associated
with one or more metrics, together with a way to aggregate the performances of
different systems. Benchmarks are instrumental in (i) assessing the progress of
new methods along different axes and (ii) selecting the best systems for
practical use. This is particularly true in NLP with the development of large
pre-trained models (e.g. GPT, BERT) that are expected to generalize well across
a variety of tasks. While the community has mainly focused on developing new
datasets and metrics, the aggregation procedure has received little attention
and is often reduced to a simple average over various performance measures.
However, this procedure can be problematic when the metrics are on different
scales, which may lead to spurious conclusions. This paper proposes a new
procedure to rank systems based on their performance across different tasks.
Motivated by social choice theory, the final system ordering is obtained by
aggregating the rankings induced by each task and is theoretically grounded. We
conduct extensive numerical experiments (on over 270k scores) to assess the
soundness of our approach on both synthetic and real scores (e.g. GLUE, EXTREM,
SEVAL, TAC, FLICKR). In particular, we show that our method yields different
conclusions on state-of-the-art systems than the mean-aggregation procedure,
while being both more reliable and robust.
Domains
Computation [stat.CO]
Main file
NeurIPS-2022-what-are-the-best-systems-new-perspectives-on-nlp-benchmarking-Paper-Conference.pdf (728.89 KB)
Origin: Files produced by the author(s)