Journal article, Journal of Signal Processing Systems, Year: 2022

Superquantile-based learning: a direct approach using gradient-based optimization.

Abstract

We consider a formulation of supervised learning that endows models with robustness to distributional shifts from training to testing. The formulation hinges upon the superquantile risk measure, also known as the conditional value-at-risk, which has shown promise in recent applications of machine learning and signal processing. We show that, thanks to a direct smoothing of the superquantile function, a superquantile-based learning objective is amenable to gradient-based optimization, using batch algorithms such as gradient descent or quasi-Newton methods, or stochastic algorithms such as stochastic gradient descent. A companion software package, SPQR, implements the described algorithms in Python and allows practitioners to experiment with superquantile-based supervised learning.
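To illustrate the abstract's claim that a smoothed superquantile objective is amenable to gradient-based optimization, here is a minimal sketch. It is not the SPQR API: it uses the Rockafellar-Uryasev variational form of the level-p superquantile (CVaR), eta + E[max(loss - eta, 0)] / (1 - p), and replaces max(., 0) with a softplus so the objective is differentiable in both the model parameters and eta. This softplus smoothing is a generic choice that may differ from the paper's direct smoothing, and all names, data, and hyperparameters (p, mu, lr) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

p = 0.9    # superquantile level: the objective averages the worst 10% of losses
mu = 0.1   # smoothing parameter: smaller mu -> closer to the exact superquantile
lr = 0.05  # gradient step size

def smoothed_superquantile_and_grads(w, eta):
    """Rockafellar-Uryasev form of the level-p superquantile,
    eta + E[max(loss - eta, 0)] / (1 - p),
    with max(., 0) replaced by a softplus of temperature mu."""
    residuals = X @ w - y
    losses = 0.5 * residuals ** 2
    z = (losses - eta) / mu
    softplus = mu * np.logaddexp(0.0, z)     # smooth surrogate of max(losses - eta, 0)
    sigm = np.exp(z - np.logaddexp(0.0, z))  # its derivative (stable sigmoid of z)
    obj = eta + softplus.mean() / (1.0 - p)
    # Chain rule: each sample contributes sigm_i times the gradient of its loss.
    sample_weights = sigm / ((1.0 - p) * n)
    grad_w = X.T @ (sample_weights * residuals)
    grad_eta = 1.0 - sigm.mean() / (1.0 - p)
    return obj, grad_w, grad_eta

w, eta = np.zeros(d), 0.0
for _ in range(500):
    obj, grad_w, grad_eta = smoothed_superquantile_and_grads(w, eta)
    w -= lr * grad_w
    eta -= lr * grad_eta

print(f"smoothed superquantile objective after training: {obj:.4f}")
```

The full-batch gradient step above could equally be replaced by a quasi-Newton update or by stochastic gradient steps on mini-batches of the per-sample losses, matching the batch and stochastic regimes mentioned in the abstract. Decreasing mu tightens the approximation of the exact (nonsmooth) superquantile at the cost of less well-conditioned gradients.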
Main file
main.pdf (625.31 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03505745, version 1 (31-12-2021)

Identifiers

Cite

Yassine Laguel, Jérôme Malick, Zaid Harchaoui. Superquantile-based learning: a direct approach using gradient-based optimization. Journal of Signal Processing Systems, 2022. ⟨hal-03505745⟩

Collections

UGA CNRS TDS-MACS