Journal article, Electronic Journal of Statistics, 2021

Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance

Abstract

In this paper, a general stochastic optimization procedure is studied, unifying several variants of stochastic gradient descent such as, among others, the stochastic heavy ball method, the Stochastic Nesterov Accelerated Gradient algorithm (S-NAG), and the widely used Adam algorithm. The algorithm is viewed as a noisy Euler discretization of a non-autonomous ordinary differential equation, recently introduced by Belotto da Silva and Gazeau, which is analyzed in depth. Assuming that the objective function is non-convex and differentiable, the stability and the almost sure convergence of the iterates to the set of critical points are established. A noteworthy special case is the convergence proof of S-NAG in a non-convex setting. Under some assumptions, the convergence rate is provided in the form of a Central Limit Theorem. Finally, the non-convergence of the algorithm to undesired critical points, such as local maxima or saddle points, is established. Here, the main ingredient is a new avoidance-of-traps result for non-autonomous settings, which is of independent interest.
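For orientation, here is a minimal, self-contained sketch (not the paper's scheme) of two of the momentum-type methods the abstract mentions, the stochastic heavy ball method and an Adam-style update, each written as a discrete-time iteration driven by noisy gradients. The toy objective, noise level, step size, and momentum parameters are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch of two momentum-type stochastic optimizers.
# All constants (step, beta, beta1, beta2, eps) and the toy objective are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x):
    """Gradient of a toy non-convex objective f(x) = sum(x^2 * cos(x)), plus Gaussian noise."""
    true_grad = 2 * x * np.cos(x) - x**2 * np.sin(x)
    return true_grad + 0.1 * rng.standard_normal(x.shape)

def stochastic_heavy_ball(x0, step=1e-2, beta=0.9, iters=5000):
    """Heavy-ball iteration: x_{k+1} = x_k - step * g_k + beta * (x_k - x_{k-1})."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - step * noisy_grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

def adam(x0, step=1e-2, beta1=0.9, beta2=0.999, eps=1e-8, iters=5000):
    """Adam-style update: exponential moving averages of the gradient and its square."""
    x = x0.copy()
    m = np.zeros_like(x)  # first-moment estimate
    v = np.zeros_like(x)  # second-moment estimate
    for k in range(1, iters + 1):
        g = noisy_grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**k)   # bias correction
        v_hat = v / (1 - beta2**k)
        x = x - step * m_hat / (np.sqrt(v_hat) + eps)
    return x

x0 = np.array([2.0, -1.5])
print("heavy ball:", stochastic_heavy_ball(x0))
print("adam:      ", adam(x0))
```

Both iterations can be read as Euler-type discretizations of a continuous-time dynamics with momentum; the paper's analysis concerns a general non-autonomous ODE of this kind rather than these particular instances.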
Main file: 2012.04002.pdf (607.43 KB). Origin: files produced by the author(s).

Dates and versions

hal-03860884, version 1 (18-11-2022)

Identifiers

HAL Id: hal-03860884
DOI: 10.1214/21-EJS1880

Cite

Anas Barakat, Pascal Bianchi, Walid Hachem, Sholom Schechtman. Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance. Electronic Journal of Statistics, 2021, 15 (2), pp.3892-3947. ⟨10.1214/21-EJS1880⟩. ⟨hal-03860884⟩