Journal article in Journal of Machine Learning Research, 2021

Aggregated Hold-Out

Abstract

Aggregated hold-out (Agghoo) is a method which averages learning rules selected by hold-out (that is, cross-validation with a single split). We provide the first theoretical guarantees on Agghoo, ensuring that it can be used safely: Agghoo performs at worst like the hold-out when the risk is convex. The same holds true in classification with the 0-1 risk, up to an additional constant factor. For the hold-out, oracle inequalities are known for bounded losses, as in binary classification. We show that similar results can be proved, under appropriate assumptions, for other risk-minimization problems. In particular, we obtain an oracle inequality for regularized kernel regression with a Lipschitz loss, without requiring that the Y variable or the regressors be bounded. Numerical experiments show that aggregation brings a significant improvement over the hold-out and that Agghoo is competitive with cross-validation.
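As a rough illustration of the procedure summarized in the abstract, the following minimal Python sketch applies aggregated hold-out to regression: on each of several random splits, hold-out selection keeps the candidate with the smallest validation risk, and the selected predictors are then averaged. The candidate grid (kernel ridge regressors), the number of splits, the hold-out fraction, and the synthetic data are illustrative assumptions, not the authors' experimental setup.

# Minimal sketch of aggregated hold-out (Agghoo) for regression.
# All concrete choices below (candidates, split count, data) are assumptions
# made for illustration only.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split


def agghoo_predict(X, y, X_new, candidates, n_splits=10, holdout_frac=0.2, seed=0):
    """Average the predictions of the hold-out-selected rule over several splits."""
    rng = np.random.RandomState(seed)
    predictions = []
    for _ in range(n_splits):
        X_tr, X_ho, y_tr, y_ho = train_test_split(
            X, y, test_size=holdout_frac, random_state=rng.randint(2**31 - 1)
        )
        # Hold-out selection: train each candidate on the training part and
        # keep the one with the smallest hold-out risk (squared loss here).
        best_model, best_risk = None, np.inf
        for make_model in candidates:
            model = make_model().fit(X_tr, y_tr)
            risk = np.mean((model.predict(X_ho) - y_ho) ** 2)
            if risk < best_risk:
                best_model, best_risk = model, risk
        predictions.append(best_model.predict(X_new))
    # Aggregation step: average the predictors selected on each split.
    return np.mean(predictions, axis=0)


# Illustrative usage on synthetic data with a small grid of kernel ridge regressors.
if __name__ == "__main__":
    rng = np.random.RandomState(1)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.3 * rng.randn(200)
    candidates = [lambda a=a: KernelRidge(alpha=a, kernel="rbf") for a in (0.01, 0.1, 1.0)]
    X_new = np.linspace(-1, 1, 5).reshape(-1, 1)
    print(agghoo_predict(X, y, X_new, candidates))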
Main file
agghoo_rkhs.pdf (539.88 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02273193, version 1 (09-09-2019)

Identifiers

Cite

Guillaume Maillard, Sylvain Arlot, Matthieu Lerasle. Aggregated Hold-Out. Journal of Machine Learning Research, 2021, 22 (20), pp.1--55. ⟨10.48550/arXiv.1909.04890⟩. ⟨hal-02273193⟩
153 views
85 downloads
