Sparse Accelerated Exponential Weights
Abstract
We consider the stochastic optimization problem of minimizing a convex function from recursive observations of its gradients. We introduce SAEW, a new procedure that accelerates exponential weights procedures from the slow rate 1/√T to the fast rate 1/T. Under strong convexity of the risk, we achieve the optimal rate of convergence for approximating sparse parameters in R^d. The acceleration is obtained through successive averaging steps applied in an online fashion. The procedure also produces sparse estimators thanks to additional hard-thresholding steps.
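To illustrate the ingredients named in the abstract (an exponential weights learner, online averaging, restarts, and hard thresholding), here is a minimal sketch in Python. It is not the authors' SAEW algorithm: the function names (`saew_sketch`, `eg_pm_round`, `hard_threshold`), the shrinking-radius restart schedule, and the threshold `tau` are illustrative assumptions, and the exact step sizes, number of rounds, and thresholds of the actual procedure are specified in the paper.

```python
import numpy as np

def hard_threshold(x, tau):
    """Keep entries with magnitude above tau; zero out the rest."""
    return np.where(np.abs(x) > tau, x, 0.0)

def eg_pm_round(grad_oracle, center, radius, eta, T, d):
    """One exponential-weights (EG+/-) pass of T stochastic gradient steps
    over the ball {x : ||x - center||_1 <= radius}; returns the average iterate."""
    # Weights over the 2d signed coordinates (+e_1,...,+e_d,-e_1,...,-e_d).
    w = np.ones(2 * d) / (2 * d)
    avg = np.zeros(d)
    for t in range(T):
        x = center + radius * (w[:d] - w[d:])   # current point in the ball
        g = grad_oracle(x)                      # noisy gradient observation
        signed_loss = radius * np.concatenate([g, -g])
        w *= np.exp(-eta * signed_loss)         # multiplicative update
        w /= w.sum()
        avg += (x - avg) / (t + 1)              # online averaging
    return avg

def saew_sketch(grad_oracle, d, radius=1.0, eta=0.1,
                rounds=5, steps_per_round=1000, tau=1e-2):
    """Illustrative accelerated scheme: restart the exponential weights learner
    around the averaged, hard-thresholded iterate with a shrinking radius."""
    center = np.zeros(d)
    for _ in range(rounds):
        avg = eg_pm_round(grad_oracle, center, radius, eta, steps_per_round, d)
        center = hard_threshold(avg, tau)  # sparsify the new center
        radius /= 2.0                      # shrink the search region each round
    return center

# Hypothetical usage: noisy gradients of a strongly convex quadratic risk
# with a sparse minimizer theta_star.
d = 20
theta_star = np.zeros(d); theta_star[:3] = [0.5, -0.3, 0.2]
rng = np.random.default_rng(0)
oracle = lambda x: (x - theta_star) + 0.1 * rng.standard_normal(d)
print(saew_sketch(oracle, d))
```

The intuition being illustrated is the one stated in the abstract: each averaging step improves the accuracy of the current center, and restarting the slow-rate exponential weights learner in a smaller region around that center is what yields acceleration, while the hard-thresholding step keeps the iterates sparse.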