OPTIMAL NON-ASYMPTOTIC BOUND OF THE RUPPERT-POLYAK AVERAGING WITHOUT STRONG CONVEXITY
Abstract
This paper is devoted to the non-asymptotic control of the mean-squared error for the Ruppert-Polyak stochastic averaged gradient descent introduced in the seminal contributions of [Rup88] and [PJ92]. In our main results, we establish non-asymptotic tight bounds (optimal with respect to the Cramér-Rao lower bound) in a very general framework that includes the uniformly strongly convex case as well as the one where the function $f$ to be minimized satisfies a weaker Kurdyka-Łojasiewicz-type condition [Loj63, Kur98]. In particular, this framework makes it possible to recover some pathological examples such as online learning for logistic regression (see [Bac14]) and recursive quantile estimation (an even non-convex situation). Finally, our bound is optimal when the decreasing step sequence $(\gamma_n)_{n \geq 1}$ satisfies $\gamma_n = \gamma n^{-\beta}$ with $\beta = 3/4$, leading to a second-order term in $O(n^{-5/4})$.
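To make the setting concrete, the following is a minimal sketch (not the authors' implementation) of Ruppert-Polyak averaging applied to recursive quantile estimation, one of the examples mentioned above. The underlying recursion is the standard Robbins-Monro scheme $\theta_{n+1} = \theta_n - \gamma_n (\mathbf{1}_{\{X_{n+1} \leq \theta_n\}} - \alpha)$ with step $\gamma_n = \gamma n^{-3/4}$, paired with the running average $\bar{\theta}_n = n^{-1} \sum_{k=1}^n \theta_k$; the function name, the Gaussian test distribution, the constant $\gamma = 1$, and the initial point $\theta_1 = 0$ are illustrative assumptions.

```python
import numpy as np

def ruppert_polyak_quantile(samples, alpha=0.5, gamma=1.0, beta=0.75):
    """Recursive alpha-quantile estimation with Ruppert-Polyak averaging.

    Raw iterate:  theta_{n+1} = theta_n - gamma_n * (1{X_{n+1} <= theta_n} - alpha)
    Step size:    gamma_n = gamma * n**(-beta)   (beta = 3/4 as in the abstract)
    Output:       the averaged iterate bar_theta_n = (1/n) * sum_k theta_k
    """
    theta = 0.0       # raw Robbins-Monro iterate (illustrative initialization)
    theta_bar = 0.0   # Ruppert-Polyak running average
    for n, x in enumerate(samples, start=1):
        gamma_n = gamma * n ** (-beta)
        # stochastic (sub)gradient of the quantile loss: 1{x <= theta} - alpha
        theta -= gamma_n * ((x <= theta) - alpha)
        # online update of the average of theta_1, ..., theta_n
        theta_bar += (theta - theta_bar) / n
    return theta_bar

rng = np.random.default_rng(0)
est = ruppert_polyak_quantile(rng.normal(size=100_000), alpha=0.9)
print(est)  # close to the 0.9-quantile of N(0,1), approximately 1.2816
```

The averaging step is what the paper's bounds concern: the raw iterate $\theta_n$ alone converges more slowly, while the averaged sequence $\bar{\theta}_n$ attains the optimal rate under the stated step-size choice.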