Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Streaming Data
Abstract
We consider the stochastic approximation problem in a streaming framework where an objective is minimized through unbiased estimates of its gradients. In this streaming framework, we consider time-varying data streams that must be processed sequentially. Our methods are Stochastic Gradient (SG) based due to their applicability and computational advantages. We provide a non-asymptotic analysis of the convergence of various SG-based methods; these include the well-known SG descent (a.k.a. the Robbins-Monro algorithm), constant and time-varying mini-batch SG methods, and their averaged estimates (a.k.a. Polyak-Ruppert averaging). Our analysis suggests choosing the learning rate according to the expected data streams, which can speed up convergence. In addition, we show how the averaged estimate can achieve optimal convergence, in the sense of attaining the Cramér-Rao lower bound, while being robust to any data stream rate. In particular, our analysis shows how Polyak-Ruppert averaging of time-varying mini-batches can provide variance reduction and accelerate convergence simultaneously, which is advantageous for large-scale learning problems. These theoretical results are illustrated for various data streams, demonstrating the effectiveness of the proposed algorithms.
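To make the setting concrete, the following is a minimal sketch of streaming SGD with time-varying mini-batches and Polyak-Ruppert averaging, applied to a least-squares objective. The step-size schedule `C / t**alpha`, the batch-size schedule `ceil(t**rho)`, and all names (`streaming_sgd`, `stream`) are illustrative assumptions, not the paper's exact specifications.

```python
import numpy as np

def streaming_sgd(stream, theta0, C=1.0, alpha=0.66):
    """Polyak-Ruppert averaged SGD over time-varying mini-batches.

    `stream` yields mini-batches (X_t, y_t) of possibly varying size;
    gamma_t = C / t**alpha is one common learning-rate choice (assumed here).
    """
    theta = theta0.copy()
    theta_bar = theta0.copy()
    for t, (X, y) in enumerate(stream, start=1):
        # Unbiased gradient estimate of 0.5 * E[(y - x^T theta)^2] from the batch
        grad = X.T @ (X @ theta - y) / len(y)
        # Robbins-Monro step with decaying learning rate
        theta -= (C / t**alpha) * grad
        # Running Polyak-Ruppert average of the iterates
        theta_bar += (theta - theta_bar) / t
    return theta, theta_bar

# Usage on a synthetic stream with growing mini-batch sizes n_t = ceil(t**rho)
rng = np.random.default_rng(0)
d = 5
theta_star = np.ones(d)

def stream(T=2000, rho=0.5):
    for t in range(1, T + 1):
        n_t = int(np.ceil(t**rho))  # time-varying mini-batch size
        X = rng.normal(size=(n_t, d))
        y = X @ theta_star + rng.normal(size=n_t)
        yield X, y

theta, theta_bar = streaming_sgd(stream(), np.zeros(d))
```

In this sketch the averaged estimate `theta_bar` is typically closer to `theta_star` than the last iterate `theta`, illustrating the variance-reduction effect of averaging discussed above.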