An Inertial Newton Algorithm for Deep Learning
Abstract
We introduce a new second-order inertial optimization method for machine learning called INDIAN. It exploits the geometry of the loss function while requiring only stochastic approximations of the function values and the generalized gradients. This makes INDIAN fully implementable and well suited to large-scale optimization problems such as the training of deep neural networks. The algorithm combines gradient-descent and Newton-like behaviors with inertia. We prove the convergence of INDIAN for most deep learning problems. To do so, we provide a framework based on tame optimization, well suited to the analysis of deep learning loss functions, in which we study the continuous dynamical system together with its discrete stochastic approximations. We prove sublinear convergence for the continuous-time differential inclusion that underlies our algorithm. In addition, we show how standard mini-batch optimization methods applied to nonsmooth nonconvex problems can yield a type of spurious stationary point not discussed before. We address this issue with a theoretical framework built around the new notion of $D$-criticality, and we then give a simple asymptotic analysis of INDIAN. Our algorithm allows for an aggressive learning rate of $o(1/\log k)$. From an empirical viewpoint, we show that INDIAN is competitive with the state of the art (stochastic gradient descent, ADAGRAD, ADAM) on popular deep learning benchmark problems.
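To make the kind of update described above concrete, here is a minimal NumPy sketch of an inertial Newton-type recursion that combines a stochastic gradient step, a Newton-like geometric correction, and inertia through an auxiliary variable, following the classical first-order reformulation of the inertial dynamics $\ddot{\theta} + \alpha\dot{\theta} + \beta\nabla^2 J(\theta)\dot{\theta} + \nabla J(\theta) = 0$. The hyperparameter names (`alpha`, `beta`, `gamma0`), the toy least-squares loss, and the $1/\log k$ step-size schedule are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: J(theta) = (1/2n) * ||A theta - b||^2.
# Mini-batch gradients of J stand in for the stochastic first-order oracle.
A = rng.normal(size=(1000, 20))
theta_star = rng.normal(size=20)
b = A @ theta_star + 0.01 * rng.normal(size=1000)

def stochastic_grad(theta, batch_size=32):
    """Unbiased mini-batch estimate of the gradient of J."""
    idx = rng.integers(0, A.shape[0], size=batch_size)
    Ab, bb = A[idx], b[idx]
    return Ab.T @ (Ab @ theta - bb) / batch_size

def inertial_newton(alpha=0.5, beta=0.1, gamma0=0.05, iters=5000):
    theta = np.zeros(A.shape[1])  # parameters
    psi = np.zeros(A.shape[1])    # auxiliary variable carrying the inertial/Newton terms
    for k in range(1, iters + 1):
        gamma = gamma0 / np.log(k + 2)   # slowly decaying step size, on the order of 1/log k
        v = stochastic_grad(theta)       # stochastic gradient estimate
        drift = (1.0 / beta - alpha) * theta - psi / beta
        # Simultaneous Euler-type update of both sequences.
        theta, psi = theta + gamma * (drift - beta * v), psi + gamma * drift
    return theta

theta_hat = inertial_newton()
print("distance to the least-squares solution:", np.linalg.norm(theta_hat - theta_star))
```

Note that second-order information enters only through the hyperparameter $\beta$ and first-order quantities; no Hessian is ever formed, which is what keeps such a scheme implementable at deep-learning scale.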