Accelerated Gradient Methods Combining Tikhonov Regularization with Geometric Damping Driven by the Hessian
Abstract
In a Hilbert framework, for general convex differentiable optimization, we consider accelerated gradient dynamics combining Tikhonov regularization with Hessian-driven damping. The temporal discretization of these dynamics leads to a new class of first-order optimization algorithms with favorable properties. The Tikhonov regularization parameter is assumed to tend to zero as time tends to infinity, which preserves the equilibria of the dynamics. The Tikhonov regularization term induces a strong convexity property that vanishes asymptotically. To take advantage of the fast convergence rates of the heavy ball method in the strongly convex case, we consider inertial dynamics in which the viscous damping coefficient is proportional to the square root of the Tikhonov regularization parameter, and hence also converges to zero. The geometric damping, which is controlled by the Hessian of the function to be minimized, attenuates the oscillations. Under an appropriate tuning of the parameters, and based on Lyapunov analysis, we show that the trajectories simultaneously enjoy several remarkable properties: fast convergence of the values, fast convergence of the gradients towards zero, and strong convergence to the minimum norm minimizer. We show that the corresponding proximal algorithms share the same properties as the continuous dynamics. Numerical illustrations confirm the results obtained. This study extends a previous paper by the authors on similar problems without Hessian-driven damping.
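As a hedged illustration of the setting described above (the exact system studied in the paper may differ in its normalization), the prototypical second-order dynamics combining a vanishing viscous damping, Hessian-driven damping, and Tikhonov regularization can be written as
\[
\ddot{x}(t) + \delta \sqrt{\varepsilon(t)}\, \dot{x}(t) + \beta\, \nabla^{2} f\big(x(t)\big)\, \dot{x}(t) + \nabla f\big(x(t)\big) + \varepsilon(t)\, x(t) = 0,
\]
where \(f\) is the convex differentiable function to be minimized, \(\varepsilon(t) \to 0\) is the Tikhonov regularization parameter, \(\delta > 0\) fixes the proportionality between the viscous damping coefficient and \(\sqrt{\varepsilon(t)}\), and \(\beta \ge 0\) is the coefficient of the geometric (Hessian-driven) damping. The symbols \(\delta\), \(\beta\), \(\varepsilon(t)\) are notation assumed here for illustration only.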
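Under the same assumed notation, the following sketch indicates why the temporal discretization of such dynamics yields first-order algorithms: on a time grid of step \(h\), the Hessian-driven term can be replaced by a difference of consecutive gradients, \(\beta\, \nabla^{2} f(x_k)\, \dot{x}(t_k) \approx \tfrac{\beta}{h}\big(\nabla f(x_k) - \nabla f(x_{k-1})\big)\), so that an explicit Euler-type scheme reads
\[
x_{k+1} = x_k + \big(1 - \delta h \sqrt{\varepsilon_k}\big)\,(x_k - x_{k-1})
- \beta h \big(\nabla f(x_k) - \nabla f(x_{k-1})\big)
- h^{2} \big(\nabla f(x_k) + \varepsilon_k x_k\big),
\]
which involves only gradient evaluations and no Hessians. This is an illustrative discretization, not the algorithms analyzed in the paper; the proximal variants mentioned above would instead evaluate the gradient (or a proximal mapping) implicitly, and their precise form is not reproduced here.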