Preprint, Working Paper. Year: 2023

Fast convex optimization via closed-loop time scaling of gradient dynamics

Hedy Attouch
Radu Bot
  • Role: Author
Khoa Nguyen
  • Role: Author

Abstract

In a Hilbert setting, for convex differentiable optimization, we develop a general framework for adaptive accelerated gradient methods. They are based on damped inertial dynamics where the coefficients are designed in a closed-loop way. Specifically, the damping is a feedback control of the velocity, or of the gradient of the objective function. To this end, we develop a closed-loop version of the time scaling and averaging technique introduced by the authors. We thus obtain autonomous inertial dynamics which involve vanishing viscous damping and implicit Hessian-driven damping. By simply using the convergence rates of the continuous steepest descent and Jensen's inequality, without the need for further Lyapunov analysis, we show that the trajectories have several remarkable properties at once: they ensure fast convergence of the function values, fast convergence of the gradients towards zero, and they converge to optimal solutions. Our approach leads to parallel algorithmic results, which we study in the case of proximal algorithms. These are among the very first general results of this type obtained using autonomous dynamics.
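
To make the time scaling and averaging mechanism concrete, here is a minimal numerical sketch in Python. It is not the paper's construction: the quadratic objective, the open-loop scaling tau(t) = t^2, and the uniform trajectory average are illustrative assumptions (the paper designs the scaling in a closed-loop way, as a feedback of the velocity or the gradient). The sketch only shows the transfer of rates: if f(z(s)) - min f decays like 1/s along the continuous steepest descent, then y(t) = z(tau(t)) decays like 1/tau(t), and by Jensen's inequality an average of trajectory points has objective value at most the average of their values.

    import numpy as np

    A = np.diag([1.0, 10.0])               # simple convex quadratic f(x) = 0.5 x^T A x
    def f(x): return 0.5 * x @ A @ x
    def grad(x): return A @ x

    def steepest_descent(z0, s_max, ds=1e-3):
        # Explicit Euler discretization of the continuous steepest descent
        # z'(s) = -grad f(z(s)); for general convex f, f(z(s)) - min f = O(1/s).
        n_steps = int(s_max / ds)
        z = z0.copy()
        traj = np.empty((n_steps, z0.size))
        for k in range(n_steps):
            z = z - ds * grad(z)
            traj[k] = z
        return ds, traj

    tau = lambda t: t ** 2                 # illustrative (open-loop) time scaling
    t_max = 10.0
    ds, traj = steepest_descent(np.array([3.0, -2.0]), tau(t_max))

    for t in [1.0, 2.5, 5.0, 10.0]:
        idx = min(int(tau(t) / ds) - 1, len(traj) - 1)
        y_t = traj[idx]                        # time-scaled trajectory y(t) = z(tau(t))
        x_avg = traj[: idx + 1].mean(axis=0)   # uniform average of the trajectory
        # Since f is convex, Jensen's inequality gives f(x_avg) <= mean of f over
        # the averaged points, so averaging does not destroy the rate in t.
        print(f"t={t:5.1f}  f(y(t))={f(y_t):.3e}  f(x_avg)={f(x_avg):.3e}")

On this strongly convex quadratic the decay is in fact geometric, faster than the worst-case O(1/s) bound, so the printed values understate how tight the general bound is; the sketch is meant only to illustrate the scaling and averaging bookkeeping, not the paper's adaptive rates.
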
Main file
ABN-closed-loop-january-2-2023.pdf (461.33 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03924441, version 1 (05-01-2023)

Identifiers

  • HAL Id: hal-03924441, version 1

Cite

Hedy Attouch, Radu Bot, Khoa Nguyen. Fast convex optimization via closed-loop time scaling of gradient dynamics. 2023. ⟨hal-03924441⟩