Preprint, Working Paper. Year: 2024

Precise Regularized Minimax Regret with Unbounded Weights

Abstract

In online learning, a learner receives data in rounds and at each round predicts a label, which is then compared to the true binary label, incurring a loss. The difference between the total loss over $T$ rounds and the loss of the best expert from a class of experts/forecasters is called the regret. In this paper we focus on logarithmic loss for the logistic function with unbounded $d$-dimensional weights, a scenario that has remained largely unexplored. We introduce a regularized version of the average (fixed design) minimax regret by imposing a \emph{soft constraint} on the weight norm, which we study via a precise analysis of the so-called Shtarkov sum. Our main results provide the first known \emph{precise} characterization of the Shtarkov sum, and consequently of the regularized regret with unbounded weights, up to second-order asymptotics. Notably, unlike the $(d/2)\log T$ regret growth known only for bounded weights, our result implies that the regularized regret grows no faster than $(1/2+\alpha/4)\,d\log T$ when the regularization parameter is of order $\Theta(T^{-\alpha})$ for $\alpha\le 1/2$. We accomplish this using tools from analytic combinatorics, e.g., the multidimensional Fourier transform, the saddle point method, and the Mellin transform.
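For orientation, here is a minimal sketch of the objects named above, written in standard notation from the fixed-design minimax-regret literature; the exact normalization and the precise form of the regularizer used in the paper may differ, and the quadratic penalty shown below is one natural instantiation of a soft constraint, assumed here for illustration. Writing $p_w(1\mid x) = (1+e^{-\langle w, x\rangle})^{-1}$ for the logistic expert with weight $w\in\mathbb{R}^d$ and a fixed design $x_1,\dots,x_T$, the classical Shtarkov sum and the corresponding minimax regret are
\[
S_T \;=\; \sum_{y^T\in\{0,1\}^T}\;\sup_{w\in\mathbb{R}^d}\;\prod_{t=1}^{T} p_w(y_t\mid x_t),
\qquad
r_T \;=\; \log S_T,
\]
and a soft constraint on the weight norm replaces the unconstrained supremum by a penalized one, e.g.
\[
S_T(\lambda) \;=\; \sum_{y^T\in\{0,1\}^T}\;\sup_{w\in\mathbb{R}^d}\; e^{-\lambda\|w\|_2^2}\,\prod_{t=1}^{T} p_w(y_t\mid x_t),
\]
so that, with $\lambda=\Theta(T^{-\alpha})$ and $\alpha\le 1/2$, the stated bound reads $\log S_T(\lambda)\le \big(\tfrac12+\tfrac{\alpha}{4}\big)\,d\log T\,(1+o(1))$.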
Main file: colt2024_precise_asymptotics.pdf (511.58 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04596769, version 1 (31-05-2024)

Identifiers

  • HAL Id: hal-04596769, version 1

Cite

Michael Drmota, Philippe Jacquet, Changlong Wu, Wojciech Szpankowski. Precise Regularized Minimax Regret with Unbounded Weights. 2024. ⟨hal-04596769⟩
