Efficient improper learning for online logistic regression - Archive ouverte HAL
Conference paper - Year: 2020

Efficient improper learning for online logistic regression

Abstract

We consider the setting of online logistic regression and study the regret with respect to the ℓ2-ball of radius B. It is known (see [Hazan et al., 2014]) that any proper algorithm with logarithmic regret in the number of samples (denoted n) necessarily suffers an exponential multiplicative constant in B. In this work, we design an efficient improper algorithm that avoids this exponential constant while preserving logarithmic regret. Indeed, [Foster et al., 2018] showed that the lower bound does not apply to improper algorithms and proposed a strategy based on exponential weights with prohibitive computational complexity. Our new algorithm, based on regularized empirical risk minimization with surrogate losses, achieves a regret scaling as O(B log(Bn)) with a per-round time complexity of order O(d^2), where d is the dimension.
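
For intuition, here is a minimal Python sketch of the general recipe the abstract describes: follow-the-regularized-leader over quadratic surrogates of the logistic loss l(y, z) = log(1 + exp(-y z)), with a Sherman-Morrison rank-one update that keeps the per-round cost at O(d^2). The class name, the surrogate curvature eta = 1/(2 + 2B), and the regularizer lam are illustrative assumptions, not the paper's exact algorithm or constants; note that the learner's iterate is never projected onto the B-ball, which is what makes the predictions improper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SurrogateERMLogistic:
    """Illustrative improper online logistic learner (not the paper's exact recursion)."""

    def __init__(self, d, B, lam=1.0, eta=None):
        # eta: curvature of the quadratic surrogate; 1/(2 + 2B) is an
        # illustrative choice, not the paper's constant.
        self.eta = eta if eta is not None else 1.0 / (2.0 + 2.0 * B)
        self.w = np.zeros(d)
        # Inverse of A_t = lam * I + eta * sum_s x_s x_s^T, kept explicitly.
        self.A_inv = np.eye(d) / lam

    def predict_proba(self, x):
        # Improper prediction: w is not constrained to the B-ball.
        return sigmoid(self.w @ x)

    def update(self, x, y):
        # y in {-1, +1}; gradient of the logistic loss at the current margin.
        g = -y * sigmoid(-y * (self.w @ x)) * x
        # Sherman-Morrison rank-one update of A_inv: O(d^2) per round.
        u = self.A_inv @ x
        self.A_inv -= self.eta * np.outer(u, u) / (1.0 + self.eta * (x @ u))
        # FTRL step on the accumulated quadratic surrogates.
        self.w = self.w - self.A_inv @ g

A quick synthetic run, where the data are generated by a comparator u_star in the B-ball:

rng = np.random.default_rng(0)
d, n, B = 5, 1000, 1.0
u_star = rng.normal(size=d)
u_star *= B / np.linalg.norm(u_star)
learner = SurrogateERMLogistic(d, B)
for _ in range(n):
    x = rng.normal(size=d)
    y = 1 if rng.random() < sigmoid(u_star @ x) else -1
    p = learner.predict_proba(x)  # prediction made before y is revealed
    learner.update(x, y)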
Main file: OnlineLogistic.pdf (374.14 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02510505 , version 1 (17-03-2020)
hal-02510505 , version 2 (19-03-2020)
hal-02510505 , version 3 (02-11-2020)

Identifiers

hal-02510505

Cite

Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi. Efficient improper learning for online logistic regression. COLT 2020 - 33rd Annual Conference on Learning Theory, Jul 2020, Graz / Virtual, Austria. ⟨hal-02510505v3⟩
