Precise Minimax Regret for Logistic Regression with Categorical Feature Values
Abstract
We study logistic regression with binary labels and categorical (discrete) feature values. Our goal is to evaluate precisely the (maximal) minimax regret. We express it as the so-called Shtarkov sum, known in information theory. To the best of our knowledge, this sum has never been computed in the context of logistic regression. More precisely, the pointwise regret of an online algorithm is defined as the excess loss it incurs over some value of a constant comparator (weight vector) used for prediction; it depends on the feature values, the label sequence, and the learning algorithm (weight vector). In the maximal minimax scenario we seek the best weights for the worst label sequence over all possible learning algorithms/distributions. Such a regret still depends on the feature values. For d = O(1)-dimensional logistic regression we show that the maximal minimax regret grows as (d/2) log(T/(2π)) + C + O(1/√T), where T is the number of rounds of the training algorithm and C is an explicitly computable constant that depends on the feature values. We also extend these results to non-binary labels. The precise maximal minimax regret presented here is the first result of its kind. Our findings are obtained using tools of analytic combinatorics and information theory.
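As an illustration of the quantity being studied, the following is a minimal numerical sketch (not the paper's method) that brute-forces the Shtarkov sum for one-dimensional logistic regression on a tiny horizon T and compares log S to the leading term (d/2) log(T/(2π)). The feature values, the horizon T = 8, and the optimizer bound below are illustrative assumptions, not values from the paper.

```python
# Brute-force sketch of the Shtarkov sum for 1-d logistic regression:
#   S(x^T) = sum over all y in {0,1}^T of  sup_w  prod_t p_w(y_t | x_t),
# with p_w(1 | x) = 1 / (1 + exp(-w*x)); the minimax regret is log S(x^T).
import itertools
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(w, x, y):
    """Negative log-likelihood of labels y under a scalar weight w."""
    z = w * x
    # log p_w(y_t|x_t) = y_t*z_t - log(1 + e^{z_t}); logaddexp for stability
    return -np.sum(y * z - np.logaddexp(0.0, z))

def shtarkov_regret(x):
    T = len(x)
    S = 0.0
    for labels in itertools.product([0, 1], repeat=T):
        y = np.array(labels, dtype=float)
        # The sup over w is approximated by a bounded 1-d search; for
        # linearly separable label sequences the sup is attained only as
        # |w| -> infinity, so the bound slightly undershoots it.
        res = minimize_scalar(neg_log_lik, bounds=(-50, 50),
                              args=(x, y), method="bounded")
        S += np.exp(-res.fun)
    return np.log(S)

x = np.array([-1.0, 1.0] * 4)  # T = 8, two categorical feature values
T, d = len(x), 1
print(f"minimax regret log S        = {shtarkov_regret(x):.4f}")
print(f"leading term (d/2)log(T/2pi) = {0.5 * d * np.log(T / (2 * np.pi)):.4f}")
```

The gap between the two printed values at such a small T reflects the constant C and the O(1/√T) correction from the asymptotic expansion; the exhaustive enumeration over 2^T label sequences is only feasible for very short horizons.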