Fast Incremental Expectation Maximization for non-convex finite-sum optimization: non-asymptotic convergence bounds *
Abstract
Fast Incremental Expectation Maximization (FIEM) was introduced to adapt Expectation Maximization (EM) to the large-scale learning setting of finite-sum, possibly non-convex optimization. In this paper, we first recast this iterative algorithm, together with other incremental EM-type algorithms, in the Stochastic Approximation within EM framework. We then provide non-asymptotic convergence bounds as a function of the number of examples $n$ and of the maximal number of iterations $K_{\max}$. We propose two strategies for reaching an $\epsilon$-approximate stationary point: either with $K_{\max} = O(n^{2/3} \epsilon^{-1})$ or with $K_{\max} = O(\sqrt{n}\, \epsilon^{-3/2})$; both strategies rely on a random termination rule before $K_{\max}$ and on a constant step size in the Stochastic Approximation step. Our bounds are explicit and improve over previous results. In particular, we provide a complexity bound which scales as $\sqrt{n}$ and improves over the bounds obtained so far; it comes at the cost of a stronger dependence on the tolerance $\epsilon$, which makes this regime relevant for small-to-medium accuracy relative to the number of examples $n$. For the $n^{2/3}$-rate, our bounds yield a numerical improvement thanks to a tighter definition of the key quantities governing the efficiency of the algorithm.

* This work is partially supported by the Fondation Simone et Cino Del Duca through the project OpSiMorE, and by the French Agence Nationale de la Recherche (ANR), project under reference ANR-PRC-CE23 MASDOL.
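As a rough illustration of the random termination rule described in the abstract, the following Python sketch draws a stopping index uniformly at random below $K_{\max}$ (with $K_{\max}$ set by either of the two proposed scalings) and runs a constant-step-size loop up to that index. This is a hypothetical sketch, not the authors' implementation: the callable `sa_step`, the function name, and its parameters are assumed placeholders for one incremental EM / Stochastic Approximation update.

```python
import math

import numpy as np


def run_with_random_termination(sa_step, theta0, n, epsilon,
                                rate="n^{2/3}", rng=None):
    """Run a constant-step-size SA-within-EM loop for a uniformly drawn
    number of iterations below K_max and return the final iterate.

    `sa_step(theta)` is an assumed callable performing one incremental
    EM / Stochastic Approximation update (hypothetical placeholder).
    """
    rng = np.random.default_rng() if rng is None else rng
    if rate == "n^{2/3}":
        k_max = math.ceil(n ** (2 / 3) / epsilon)         # K_max = O(n^{2/3} eps^{-1})
    else:
        k_max = math.ceil(math.sqrt(n) / epsilon ** 1.5)  # K_max = O(sqrt(n) eps^{-3/2})
    # Random termination rule: uniform stopping index in {0, ..., K_max - 1}.
    stop = int(rng.integers(k_max))
    theta = theta0
    for _ in range(stop + 1):
        theta = sa_step(theta)  # constant step size assumed inside sa_step
    return theta
```

The point of returning a uniformly sampled iterate, rather than the last one, is the standard device in non-convex stochastic approximation: a bound on the average (over iterations) of a stationarity measure then transfers to the randomly returned iterate in expectation.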