On the convergence of mirror descent beyond stochastic convex programming
Abstract
In this paper, we examine a class of nonconvex stochastic optimization problems which we call variationally coherent, and which properly includes all quasi-convex programs. In view of solving such problems, we focus on the widely used stochastic mirror descent (SMD) family of algorithms, and we establish that the method's last iterate converges with probability 1. We further introduce a localized version of variational coherence which ensures local convergence of SMD with high probability. These results contribute to the landscape of nonconvex stochastic optimization by showing that quasi-convexity is not essential for convergence: rather, variational coherence, a much weaker requirement, suffices. Finally, building on the above, we reveal an interesting insight regarding the convergence speed of SMD: in variationally coherent problems with sharp minima (e.g., generic linear programs), the last iterate of SMD reaches an exact global optimum in a finite number of steps (a.s.), even in the presence of persistent noise. This result is to be contrasted with existing work on black-box stochastic linear programs, which only exhibit asymptotic convergence rates.
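For concreteness, here is a minimal sketch of the two objects named above, written in standard mirror-descent notation that we supply ourselves (the symbols $f$, $h$, $Q$, $\gamma_n$, $\hat g_n$ and the solution set $\mathcal{X}^\ast$ are notational assumptions, not quoted from the abstract). Variational coherence asks that the gradient field point "towards" the solution set:
$$\langle \nabla f(x),\, x - x^\ast \rangle \;\ge\; 0 \qquad \text{for all } x \in \mathcal{X},\ x^\ast \in \mathcal{X}^\ast,$$
with equality only when $x$ itself is a solution. Given a strongly convex mirror map $h$ with induced prox-mapping $Q(y) = \arg\max_{x \in \mathcal{X}} \{\langle y, x\rangle - h(x)\}$, the SMD iteration alternates a stochastic gradient step in the dual with a mirror step back to the feasible set:
$$Y_{n+1} = Y_n - \gamma_n \hat g_n, \qquad X_{n+1} = Q(Y_{n+1}),$$
where $\hat g_n$ is an unbiased (possibly noisy) estimate of $\nabla f(X_n)$ and $\gamma_n$ is the step-size sequence.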