Preprint / Working paper, Year: 2020

An Inertial Newton Algorithm for Deep Learning

Abstract

We introduce a new second-order inertial optimization method for machine learning called INDIAN. It exploits the geometry of the loss function while requiring only stochastic approximations of the function values and the generalized gradients. This makes INDIAN fully implementable and well suited to large-scale optimization problems such as the training of deep neural networks. The algorithm combines gradient-descent and Newton-like behaviors as well as inertia. We prove the convergence of INDIAN for most deep learning problems. To do so, we provide a framework, based on tame optimization, that is well suited to the analysis of deep learning loss functions, and in which we study the continuous dynamical system together with its discrete stochastic approximations. We prove sublinear convergence for the continuous-time differential inclusion that underlies our algorithm. We also show how standard mini-batch optimization methods applied to nonsmooth nonconvex problems can yield a type of spurious stationary point not previously discussed. We address this issue with a theoretical framework built around the new idea of $D$-criticality, and we then give a simple asymptotic analysis of INDIAN. Our algorithm allows the use of an aggressive learning rate of $o(1/\log k)$. From an empirical viewpoint, we show that INDIAN returns results competitive with the state of the art (stochastic gradient descent, ADAGRAD, ADAM) on popular deep learning benchmark problems.
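The abstract describes INDIAN only at a high level: an inertial, Newton-like method driven by stochastic generalized gradients, with a slowly decaying learning rate. The precise update rule is given in the paper itself. As a rough illustration only, the NumPy sketch below shows a generic two-variable inertial update of that general flavor. The function name, the hyperparameter names alpha and beta, the particular coupling term, and the toy quadratic are assumptions made for this sketch, not details taken from this page.

import numpy as np

def inertial_newton_like_step(theta, psi, grad, gamma, alpha=0.5, beta=0.1):
    # One step of a generic two-variable inertial update driven by a
    # stochastic gradient estimate. The coupling term below, and the
    # names alpha/beta, are illustrative assumptions, not taken from
    # this page; the actual INDIAN update is specified in the paper.
    coupling = (alpha - 1.0 / beta) * theta + (1.0 / beta) * psi
    theta_next = theta - gamma * (coupling + beta * grad)
    psi_next = psi - gamma * coupling
    return theta_next, psi_next

# Toy usage on the quadratic f(theta) = 0.5 * ||theta||^2 with noisy gradients.
rng = np.random.default_rng(0)
theta = rng.normal(size=10)
psi = theta.copy()
for k in range(1000):
    grad = theta + 0.01 * rng.normal(size=10)   # noisy gradient of f
    gamma = 0.1 / np.log(k + 2) ** 1.1          # decays slightly faster than 1/log k
    theta, psi = inertial_newton_like_step(theta, psi, grad, gamma)
print(np.linalg.norm(theta))   # the iterates approach the minimizer 0

The step-size schedule in the toy loop decays slightly faster than 1/log k, so it is $o(1/\log k)$ in the sense mentioned in the abstract.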
Main file
arxiv.pdf (1.38 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02140748 , version 1 (27-05-2019)
hal-02140748 , version 2 (06-06-2019)
hal-02140748 , version 3 (12-12-2019)
hal-02140748 , version 4 (12-10-2020)
hal-02140748 , version 5 (02-07-2021)
hal-02140748 , version 6 (20-08-2021)

Identifiers

Cite

Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels. An Inertial Newton Algorithm for Deep Learning. 2020. ⟨hal-02140748v4⟩
966 Views
417 Downloads

