Preprint / working paper. Year: 2024

Universal Generalization Guarantees for Wasserstein Distributionally Robust Models

Tam Le
Jérôme Malick

Abstract

Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have shown that robust models built from Wasserstein ambiguity sets enjoy attractive generalization guarantees that break the curse of dimensionality. However, these results hold only in specific cases, at the cost of approximations, or under assumptions that are difficult to verify in practice. In contrast, in this article we establish exact generalization guarantees that cover all practical cases, including any transport cost function and any loss function, potentially non-convex and nonsmooth. For instance, our result applies to deep learning without requiring restrictive assumptions. We achieve this through a novel proof technique that combines a nonsmooth-analysis rationale with classical concentration results. Our approach is general enough to extend to recent variants of Wasserstein/Sinkhorn distributionally robust problems that involve (double) regularization.
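The paper is theoretical, but the objects it studies are easy to illustrate. A standard way to train a Wasserstein distributionally robust model in practice is to replace the constrained robust objective sup_{Q : W_c(Q, P_n) ≤ ρ} E_Q[loss] by a Lagrangian relaxation, where each data point is adversarially perturbed under a transport-cost penalty λ·c(z, x). Below is a minimal PyTorch sketch of this penalized scheme, given only as an illustration of the model class the guarantees cover, not as the authors' method; the function name wdro_penalized_loss and the hyperparameters lam, ascent_steps, and ascent_lr are hypothetical choices.

```python
# Minimal sketch (not the paper's method) of Wasserstein DRO training via the
# Lagrangian relaxation  sup_z  loss(model(z), y) - lam * c(z, x),
# with the squared Euclidean transport cost c(z, x) = ||z - x||^2.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

def wdro_penalized_loss(model, x, y, lam=1.0, ascent_steps=10, ascent_lr=0.1):
    """Approximate the robust loss by gradient ascent on perturbed inputs z."""
    loss_fn = nn.CrossEntropyLoss()
    z = x.clone().detach().requires_grad_(True)
    for _ in range(ascent_steps):
        # Inner objective: loss minus the transport-cost penalty lam * ||z - x||^2.
        obj = loss_fn(model(z), y) - lam * ((z - x) ** 2).sum(dim=1).mean()
        grad, = torch.autograd.grad(obj, z)
        with torch.no_grad():
            z += ascent_lr * grad  # ascent step on the perturbed points
    # Outer step: the model is trained on the (approximately) worst-case points.
    return loss_fn(model(z.detach()), y)

# Usage sketch: one SGD step on a synthetic batch (x, y).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
opt.zero_grad()
wdro_penalized_loss(model, x, y).backward()
opt.step()
```

In this sketch, lam plays the role of the dual multiplier λ and the squared Euclidean distance is one particular choice of transport cost c; the paper's guarantees place no restriction on either the cost or the loss.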
Main file
preprint.pdf (585.32 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04460543, version 1 (15-02-2024)
hal-04460543, version 2 (28-05-2024)
hal-04460543, version 3 (11-10-2024)

Identifiers

HAL Id: hal-04460543

Cite

Tam Le, Jérôme Malick. Universal Generalization Guarantees for Wasserstein Distributionally Robust Models. 2024. ⟨hal-04460543v1⟩
339 views
83 downloads

