Precise Regularized Minimax Regret with Unbounded Weights
Abstract
In online learning a learner receives data in rounds and
at each round predicts a label, which is then compared to the true binary
label, incurring a loss. The excess of the total loss over $T$ rounds
above the loss of the best expert from a class of experts/forecasters
is called the regret.
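For concreteness, under log loss this regret takes the following standard form (a generic formulation for illustration; the symbols $\hat y_t$, $x_t$, and $f_w$ are ours, not the paper's notation):
\[
R_T \;=\; \sum_{t=1}^{T}\ell(\hat y_t,y_t)\;-\;\inf_{w}\sum_{t=1}^{T}\ell\bigl(f_w(x_t),y_t\bigr),
\qquad
\ell(p,y)\;=\;-y\log p-(1-y)\log(1-p),
\]
with binary labels $y_t\in\{0,1\}$ and predictions in $(0,1)$.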
In this paper we focus on logarithmic loss for the logistic function
with unbounded $d$-dimensional weights, a setting that has remained largely unexplored.
We introduce a regularized version of the average (fixed-design) minimax regret by imposing a \emph{soft constraint} on the weight norm, and analyze it through a precise study of the so-called Shtarkov sum. Our main results provide the first known \emph{precise} characterization of the Shtarkov sum and, consequently, of the regularized regret with unbounded weights, up to second-order asymptotics.
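As a sketch of the central object (the unregularized form is classical; the quadratic penalty $e^{-\lambda\|w\|^2}$ below is an illustrative assumption for the soft constraint, with $\lambda$ the regularization parameter): for a fixed design $x_1,\dots,x_T$, the minimax regret equals $\log S_T$, where
\[
S_T \;=\; \sum_{y^T\in\{0,1\}^T}\;\sup_{w\in\mathbb{R}^d}\;\prod_{t=1}^{T}p_w(y_t\mid x_t),
\qquad
S_T(\lambda) \;=\; \sum_{y^T\in\{0,1\}^T}\;\sup_{w\in\mathbb{R}^d}\;e^{-\lambda\|w\|^2}\prod_{t=1}^{T}p_w(y_t\mid x_t),
\]
and $p_w(1\mid x)=1/(1+e^{-\langle w,x\rangle})$ is the logistic model; the regularized regret is then governed by $\log S_T(\lambda)$.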
Notably, unlike the $(d/2)\log T$ regret growth known only for
bounded weights,
our result implies that the regularized regret grows no faster than
$(1/2+\alpha/4)\,d\log T$ when the regularization parameter is of order
$\Theta(T^{-\alpha})$ for $\alpha\le 1/2$.
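For instance, at the boundary value $\alpha=1/2$, i.e., a regularization parameter of order $\Theta(T^{-1/2})$, the bound specializes to
\[
\Bigl(\tfrac12+\tfrac18\Bigr)d\log T\;=\;\tfrac58\,d\log T,
\]
so lifting the boundedness assumption costs at most an extra $(\alpha/4)\,d\log T$ over the bounded-weight rate.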
We accomplish this using tools from analytic combinatorics, such as the
multidimensional Fourier transform, the saddle point method, and the Mellin transform.
Domains
Machine Learning [stat.ML]