Friction-adaptive descent: a family of dynamics-based optimization methods
Abstract
We describe a family of descent algorithms that generalizes common existing schemes used in applications such as neural network training and, more broadly, in the optimization of smooth functions, whether for global optimization or as a local optimization method deployed within global optimization schemes such as basin hopping. By introducing an auxiliary degree of freedom we obtain a dynamical system with improved stability, reducing oscillatory modes and accelerating convergence to minima. The resulting algorithms are simple to implement and control, and convergence can be shown directly by Lyapunov's second method. Although this framework, which we refer to as friction-adaptive descent (FAD), is fairly general, we focus most of our attention here on a specific variant, termed KFAD (kinetic FAD), based on kinetic energy stabilization, which can be viewed as a zero-temperature Nosé-Hoover scheme with added dissipation in both the physical and auxiliary variables. To illustrate the flexibility of the FAD framework we consider several other methods. In certain asymptotic limits, these methods can be viewed as introducing cubic damping in various forms, and they can be more efficient than linearly dissipated Hamiltonian dynamics in common optimization settings. We present details of the numerical methods and show convergence for both the continuous and discretized dynamics in the convex setting by constructing Lyapunov functions. The methods are tested on a toy model (the Rosenbrock function). We also demonstrate the methods for structural optimization of atomic clusters in Lennard-Jones and Morse potentials. The experiments show the relative efficiency and robustness of FAD in comparison to linearly dissipated Hamiltonian dynamics.
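To make the kind of dynamics summarized above concrete, the following is a minimal Python sketch of a zero-temperature Nosé-Hoover-like descent with added dissipation in both the momentum and the auxiliary friction variable, applied to the Rosenbrock test function. This is an illustrative stand-in rather than the paper's KFAD scheme: the explicit Euler integrator, the particular coupling of the auxiliary variable, and all parameter values (dt, gamma, eps, Q) are assumptions made here for demonstration.

```python
# Illustrative sketch only: a dissipated Hamiltonian descent with an adaptive
# friction variable xi, in the spirit of "kinetic energy stabilization".
# The exact KFAD equations, discretization, and parameters are those defined
# in the paper body, not the placeholder choices used here.
import numpy as np


def rosenbrock(q):
    x, y = q
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2


def rosenbrock_grad(q):
    x, y = q
    return np.array([
        -2.0 * (1.0 - x) - 400.0 * x * (y - x ** 2),
        200.0 * (y - x ** 2),
    ])


def friction_adaptive_descent(grad, q0, dt=1e-3, gamma=1.0, eps=1.0, Q=1.0,
                              n_steps=200_000):
    """Explicit-Euler sketch of the assumed dynamics
        dq/dt  = p
        dp/dt  = -grad f(q) - (gamma + xi) p
        dxi/dt = |p|^2 / Q - eps * xi
    where xi grows with the kinetic energy and is itself damped."""
    q = np.asarray(q0, dtype=float)
    p = np.zeros_like(q)
    xi = 0.0
    for _ in range(n_steps):
        g = grad(q)
        p = p + dt * (-g - (gamma + xi) * p)   # momentum with adaptive friction
        xi = xi + dt * (np.dot(p, p) / Q - eps * xi)  # kinetic-energy feedback
        q = q + dt * p
    return q


if __name__ == "__main__":
    q_star = friction_adaptive_descent(rosenbrock_grad, q0=[-1.2, 1.0])
    print("approximate minimizer:", q_star, "f =", rosenbrock(q_star))
```

Setting Q very large (so that xi stays near zero) recovers ordinary linearly dissipated Hamiltonian dynamics with fixed friction gamma, which is the baseline the abstract compares against.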