Primal-dual subgradient methods for minimizing uniformly convex functions
Abstract
We discuss non-Euclidean deterministic and stochastic algorithms for optimization problems with strongly and uniformly convex objectives. We provide accuracy bounds for the performance of these algorithms and design methods which are adaptive with respect to the parameters of strong or uniform convexity of the objective: when the total number of iterations $N$ is fixed, their accuracy coincides, up to a factor logarithmic in $N$, with the accuracy of optimal algorithms.
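To fix ideas, the following is a minimal sketch (not the algorithm of the paper) of a classical projected subgradient method for a $\mu$-strongly convex, possibly nonsmooth objective, with the standard step sizes $\gamma_k = 2/(\mu(k+1))$ and weighted averaging of iterates; the test problem, radius, and parameter values are illustrative assumptions.

```python
import numpy as np

def subgradient_method(f_subgrad, x0, mu, R, n_iters=1000):
    """Sketch: minimize a mu-strongly convex function over the ball ||x|| <= R."""
    x = np.asarray(x0, dtype=float)
    x_avg = np.zeros_like(x)
    weight_sum = 0.0
    for k in range(1, n_iters + 1):
        g = f_subgrad(x)                  # any subgradient at the current point
        gamma = 2.0 / (mu * (k + 1))      # step size for strongly convex objectives
        x = x - gamma * g
        norm = np.linalg.norm(x)
        if norm > R:                      # project back onto the feasible ball
            x = x * (R / norm)
        x_avg += k * x                    # weights proportional to k
        weight_sum += k
    return x_avg / weight_sum

if __name__ == "__main__":
    # Illustrative problem: f(x) = ||x - b||_1 + (mu/2)||x||^2 (nonsmooth, strongly convex).
    b = np.array([1.0, -2.0, 0.5])
    mu = 0.1
    subgrad = lambda x: np.sign(x - b) + mu * x
    print(subgradient_method(subgrad, np.zeros(3), mu=mu, R=10.0, n_iters=5000))
```

This schedule attains the standard $O(1/(\mu N))$ rate for strongly convex objectives, but it requires knowing $\mu$ in advance; the adaptivity discussed in the abstract concerns removing exactly this kind of prior knowledge.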