How to validate average calibration for machine learning regression tasks? - Archive ouverte HAL
Preprint, Working Paper. Year: 2024

How to validate average calibration for machine learning regression tasks?

Pascal Pernot

Abstract

Average calibration of the uncertainties of machine learning regression tasks can be tested in two ways. One way is to estimate the calibration error (CE) as the difference between the mean squared error (MSE) and the mean variance (MV), or mean squared uncertainty. The alternative is to compare the mean squared z-scores, or scaled errors (ZMS), to 1. The two approaches may lead to different conclusions, as illustrated on an ensemble of datasets from the recent machine learning uncertainty quantification literature. It is shown here that the CE is very sensitive to the distribution of uncertainties, notably to the presence of outlying uncertainties, and that it cannot be used reliably for calibration testing. By contrast, the ZMS statistic does not present this sensitivity issue and offers the most reliable approach in this context. Implications for the validation of conditional calibration are discussed.
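
The following is a minimal sketch, not taken from the paper, of how the two statistics described in the abstract can be computed from arrays of prediction errors and predicted standard uncertainties; the function names, variable names, and synthetic data are illustrative assumptions.

```python
import numpy as np

def calibration_error(errors, uncertainties):
    """CE: difference between the mean squared error (MSE) and the
    mean variance (MV, mean squared uncertainty).
    Zero for perfect average calibration."""
    mse = np.mean(np.asarray(errors) ** 2)
    mv = np.mean(np.asarray(uncertainties) ** 2)
    return mse - mv

def zms(errors, uncertainties):
    """ZMS: mean of squared z-scores (errors scaled by their
    uncertainties). Equals 1 for perfect average calibration."""
    z = np.asarray(errors) / np.asarray(uncertainties)
    return np.mean(z ** 2)

# Synthetic, well-calibrated example data (illustrative only)
rng = np.random.default_rng(0)
u = rng.uniform(0.5, 2.0, size=10_000)   # predicted standard uncertainties
e = rng.normal(0.0, u)                   # errors drawn consistently with u
print(f"CE  = {calibration_error(e, u):.3f}  (target: 0)")
print(f"ZMS = {zms(e, u):.3f}  (target: 1)")
```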

Dates and versions

hal-04465753, version 1 (19-02-2024)

Identifiers

Cite

Pascal Pernot. How to validate average calibration for machine learning regression tasks?. 2024. ⟨hal-04465753⟩