Preprint (working paper), year: 2022

Model predictivity assessment: incremental test-set selection and accuracy evaluation

Abstract

Unbiased assessment of the predictivity of models learnt by supervised machine-learning methods requires knowledge of the learned function over a reserved test set (not used by the learning algorithm). The quality of the assessment depends, naturally, on the properties of the test set and on the error statistic used to estimate the prediction error. In this work we tackle both issues, proposing a new predictivity criterion that carefully weights the individual observed errors to obtain a global error estimate, and using incremental experimental design methods to "optimally" select the test points on which the criterion is computed. Several incremental constructions are studied, including greedy-packing (coffee-house design), support points and kernel herding techniques. Our results show that the incremental and weighted versions of the latter two, based on Maximum Mean Discrepancy concepts, yield superior performance. An industrial test case provided by the historical French electricity supplier (EDF) illustrates the practical relevance of the methodology, indicating that it is an efficient alternative to expensive cross-validation techniques.
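To make the selection-and-weighting idea concrete, below is a minimal Python/NumPy sketch of greedy kernel herding (one of the incremental constructions studied) together with one plausible MMD-style weighting of the observed errors. The Gaussian kernel, the lengthscale, the linear-system weighting, and the names gaussian_kernel, kernel_herding and mmd_weights are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def gaussian_kernel(A, B, lengthscale=1.0):
        # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
        sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-np.maximum(sq, 0.0) / (2.0 * lengthscale ** 2))

    def kernel_herding(candidates, n_test, lengthscale=1.0):
        # Greedily pick n_test indices from the candidate pool so that the
        # selected points track the candidate distribution in MMD: at each
        # step, maximise mu(x) - (1/(t+1)) * sum_i k(x, x_i) over the
        # unselected candidates.
        K = gaussian_kernel(candidates, candidates, lengthscale)
        potential = K.mean(axis=1)            # empirical kernel embedding mu(x)
        herd_sum = np.zeros(len(candidates))  # running sum of k(x, x_i)
        selected = []
        for t in range(n_test):
            scores = potential - herd_sum / (t + 1)
            scores[selected] = -np.inf        # never reselect a point
            idx = int(np.argmax(scores))
            selected.append(idx)
            herd_sum += K[:, idx]
        return selected

    def mmd_weights(candidates, selected, lengthscale=1.0, jitter=1e-10):
        # One plausible weighting (hypothetical, not necessarily the paper's):
        # MMD-optimal quadrature weights solving K_sel w = mu_sel, normalised
        # so the weighted observed errors form an average.
        sel = candidates[selected]
        K_sel = gaussian_kernel(sel, sel, lengthscale)
        mu_sel = gaussian_kernel(sel, candidates, lengthscale).mean(axis=1)
        w = np.linalg.solve(K_sel + jitter * np.eye(len(selected)), mu_sel)
        return w / w.sum()

    # Example: pick 20 test points out of 1000 candidates in dimension 5,
    # then combine per-point squared errors into a global estimate.
    rng = np.random.default_rng(0)
    pool = rng.uniform(size=(1000, 5))
    idx = kernel_herding(pool, n_test=20, lengthscale=0.5)
    w = mmd_weights(pool, idx, lengthscale=0.5)
    # With model predictions y_pred and reference values y_true on pool[idx]:
    # error_estimate = w @ (y_true - y_pred) ** 2

For context, support points follow a similar greedy scheme but minimise the energy distance (an MMD with the energy kernel), while the greedy-packing (coffee-house) design instead picks each new point to maximise its distance to the points already selected.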
Main file
FekhariIoossEtal_HAL.pdf (2.04 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03523695, version 1 (12-01-2022)
hal-03523695, version 2 (14-05-2022)
hal-03523695, version 3 (07-07-2022)

Identifiers

  • HAL Id: hal-03523695, version 2

Cite

Elias Fekhari, Bertrand Iooss, Joseph Muré, Luc Pronzato, Maria Joao Rendas. Model predictivity assessment: incremental test-set selection and accuracy evaluation. 2022. ⟨hal-03523695v2⟩
395 Views
219 Downloads
