Preprint / Working Paper. Year: 2021

On the asymptotic rate of convergence of Stochastic Newton algorithms and their Weighted Averaged versions

Abstract

Most machine learning methods can be regarded as the minimization of an unavailable risk function. To optimize this function from samples provided in a streaming fashion, we define a general stochastic Newton algorithm and its weighted averaged version. We show that, in several use cases, both implementations avoid inverting a Hessian estimate at each iteration and instead update the estimate of the inverse Hessian directly, generalizing a trick introduced in [2] for the specific case of logistic regression. Under mild assumptions such as local strong convexity at the optimum, we establish almost sure convergence and rates of convergence of the algorithms, as well as central limit theorems for the constructed parameter estimates. The unified framework considered in this paper covers the cases of linear, logistic, and softmax regressions, to name a few. Numerical experiments on simulated data provide empirical evidence of the pertinence of the proposed methods, which outperform popular competitors, particularly in the case of bad initializations.
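The abstract refers to updating the inverse-Hessian estimate directly rather than inverting a Hessian estimate at each iteration. As an illustration only, the sketch below applies this idea to streaming logistic regression, maintaining the inverse of a regularized Hessian sum with the Sherman-Morrison formula; the function name, the ridge regularization, and the logarithmic averaging weights are assumptions made for the example and are not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_newton_logistic(stream, dim, ridge=1.0, avg_power=2.0):
    """Illustrative one-pass stochastic Newton sketch for logistic regression.

    The inverse of the regularized Hessian sum
        S_n = ridge * I + sum_{k<=n} p_k (1 - p_k) x_k x_k^T
    is maintained directly via the Sherman-Morrison formula, so no d x d matrix
    is inverted at any step (O(d^2) cost per sample).
    `stream` yields (x, y) pairs with y in {0, 1}.
    """
    theta = np.zeros(dim)            # current iterate
    theta_bar = np.zeros(dim)        # weighted averaged iterate
    weight_sum = 0.0
    S_inv = np.eye(dim) / ridge      # inverse of the initial (regularized) Hessian sum
    for n, (x, y) in enumerate(stream, start=1):
        p = sigmoid(x @ theta)
        a = p * (1.0 - p)            # per-sample curvature of the logistic loss
        # Sherman-Morrison update of S_inv for the rank-one term a * x x^T
        Sx = S_inv @ x
        S_inv -= a * np.outer(Sx, Sx) / (1.0 + a * (x @ Sx))
        # Newton step with step size 1/n: since the Hessian estimate is roughly
        # S_n / n, the update (1/n) * (S_n/n)^{-1} * grad reduces to S_n^{-1} * grad.
        grad = (p - y) * x
        theta -= S_inv @ grad
        # Weighted averaging with logarithmic weights (one common choice)
        w = np.log(n + 1.0) ** avg_power
        weight_sum += w
        theta_bar += (w / weight_sum) * (theta - theta_bar)
    return theta, theta_bar
```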
Main file

BGB2020.pdf (3 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03008212 , version 1 (17-11-2020)
hal-03008212 , version 2 (09-01-2021)
hal-03008212 , version 3 (05-12-2022)
hal-03008212 , version 4 (27-06-2023)
hal-03008212 , version 5 (28-06-2023)

Identifiers

Cite

Claire Boyer, Antoine Godichon-Baggioni. On the asymptotic rate of convergence of Stochastic Newton algorithms and their Weighted Averaged versions. 2021. ⟨hal-03008212v2⟩
844 Views
200 Downloads
