Minimax Regret with Unbounded Weights
Abstract
In online learning, a learner receives data in rounds $t = 1, \ldots, T$ and
at each round predicts a label, which is then compared to the true
label, resulting in a loss. The excess of the total loss over $T$ rounds above
the loss of the best expert from a class of experts
is called the regret.
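Concretely, under logarithmic loss the regret over $T$ rounds can be written as follows; the symbols $\hat p_t$ (the learner's predicted distribution at round $t$) and $\mathbf{x}_t$ (the round-$t$ input) are our illustrative notation, not necessarily that of the paper:
$$
\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \log\frac{1}{\hat p_t(y_t)} \;-\; \inf_{\mathbf{w}} \sum_{t=1}^{T} \log\frac{1}{p(y_t \mid \mathbf{x}_t, \mathbf{w})}.
$$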
This paper focuses on logarithmic loss over a class of experts $\mathcal{H}_{p,\mathbf{w}}$, represented by a probability distribution $p$ and parameterized by a $d$-dimensional weight vector $\mathbf{w}$.
Unlike previous work, which studied bounded weights, we assume that the norm of the weights can be \emph{unbounded}.
This unboundedness poses a challenging problem and leads to unexpected results.
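As a concrete illustration of a logistic-like class (a standard example, assumed here rather than taken from the paper), consider binary labels $y_t \in \{0,1\}$ with
$$
p(y_t = 1 \mid \mathbf{x}_t, \mathbf{w}) \;=\; \frac{1}{1 + e^{-\mathbf{w}^{\top}\mathbf{x}_t}},
$$
where letting $\|\mathbf{w}\|$ grow without bound drives the predicted probabilities toward $0$ or $1$.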
For such a class of weighted experts we analyze
the (fixed-design) minimax regret, that is, the regret of the
best predictor against the worst-case label sequence.
This minimax regret turns out to be a universal lower bound for
most regrets analyzed in the literature.
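For orientation, recall the classical Shtarkov-sum expression for the fixed-design minimax regret under logarithmic loss (stated here in our notation, for binary labels):
$$
r_T^{*} \;=\; \log \sum_{y^T \in \{0,1\}^T} \sup_{\mathbf{w}} \, p(y^T \mid \mathbf{x}^T, \mathbf{w}),
$$
where $\mathbf{x}^T = (\mathbf{x}_1, \ldots, \mathbf{x}_T)$ is the fixed design and $y^T = (y_1, \ldots, y_T)$ ranges over all label sequences.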
For bounded weights it is known that the minimax regret can grow like
$(d/2) \log(T R^2/d)$, where $R$ is an upper bound on the weight norm.
In contrast, we show in this paper that for the unbounded case $R=\infty$ the minimax regret
is asymptotically $(d-1)\log(T/d)$ for a logistic-like expert class,
a result we also extend to $R=\Omega(\sqrt{T})$.
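For fixed $R$ and large $T$ both logarithmic factors behave like $\log T$, so the ratio of the two expressions is roughly $(d-1)/(d/2) = 2 - 2/d$; in other words, removing the norm bound roughly doubles the constant in front of $\log T$ once $d$ is large.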
We prove our findings by introducing so-called splittable label sequences, which
partition the weight space into $T^{d-1}$ regions whose maximum
sequence probability equals $1$.
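Read together with the Shtarkov-sum expression above (our illustrative framing), this makes the asymptotics plausible: if on the order of $T^{d-1}$ label sequences each achieve maximum probability $1$, the sum is of order $T^{d-1}$, and hence
$$
r_T^{*} \;\approx\; \log T^{d-1} \;=\; (d-1)\log T,
$$
matching the stated $(d-1)\log(T/d)$ bound up to lower-order terms.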
Finally, for a general class of monotone experts we present an upper bound
of $2d\log T$ on the regret.