Journal article in Optimization Letters, Year: 2024

Local linear convergence of proximal coordinate descent algorithm

Abstract

For composite nonsmooth optimization problems that are "regular enough", proximal gradient descent achieves model identification after a finite number of iterations. For the Lasso, for instance, this means that the iterates of proximal gradient descent identify the nonzero coefficients after a finite number of steps. This identification property has been shown for various optimization algorithms, such as accelerated gradient descent, Douglas–Rachford, or variance-reduced algorithms; results concerning coordinate descent, however, are scarcer. Identification properties often rely on the framework of "partial smoothness", a powerful but technical tool. In this work, we show that partly smooth functions have a simple characterization when the nonsmooth penalty is separable. In this simplified framework, we prove that cyclic coordinate descent achieves model identification in finite time, which yields explicit local linear convergence rates for coordinate descent. Extensive experiments on various estimators and on real datasets show that these rates match empirical results well.
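
To make the setting concrete, the sketch below (not the authors' code) runs cyclic proximal coordinate descent on the Lasso and records the set of nonzero coefficients after each epoch; once this set stops changing, the model (support) has been identified and the local linear regime described in the paper can kick in. The synthetic data, the regularization level alpha, and the helper names soft_threshold and cd_lasso are illustrative assumptions, not taken from the paper.

import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * |.| (closed-form prox of the l1 penalty).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def cd_lasso(X, y, alpha, n_epochs=200):
    # Cyclic proximal coordinate descent for
    # min_w 1/(2 n) ||y - X w||^2 + alpha ||w||_1.
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    residual = y - X @ w
    lipschitz = (X ** 2).sum(axis=0)  # coordinate-wise Lipschitz constants ||X_j||^2
    supports = []
    for epoch in range(n_epochs):
        for j in range(n_features):
            if lipschitz[j] == 0.0:
                continue
            old_wj = w[j]
            w[j] = soft_threshold(
                old_wj + X[:, j] @ residual / lipschitz[j],
                alpha * n_samples / lipschitz[j],
            )
            if w[j] != old_wj:
                residual += (old_wj - w[j]) * X[:, j]
        supports.append(frozenset(np.flatnonzero(w)))  # current model
    return w, supports

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(50)
alpha = 0.1 * np.max(np.abs(X.T @ y)) / len(y)
w, supports = cd_lasso(X, y, alpha)
# After some epoch the support stops changing: model identification in finite time.
first_stable = next(k for k in range(len(supports)) if supports[k] == supports[-1])
print("support identified at epoch", first_stable, "->", sorted(supports[-1]))

Each coordinate update is a one-dimensional proximal step (soft-thresholding) with its own Lipschitz constant, which is what makes per-coordinate updates cheap when the nonsmooth penalty is separable, as assumed in the paper.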
File not deposited

Dates and versions

hal-04308828 , version 1 (27-11-2023)

Identifiers

Cite

Quentin Klopfenstein, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon, Samuel Vaiter. Local linear convergence of proximal coordinate descent algorithm. Optimization Letters, 2024, 18, pp.135-154. ⟨10.1007/s11590-023-01976-z⟩. ⟨hal-04308828⟩