Journal article in Journal of Optimization Theory and Applications, 2022

Restarting Frank-Wolfe

Abstract

Conditional Gradient methods (a.k.a. Frank-Wolfe algorithms) form a classical family of methods for constrained smooth convex minimization, valued for their simplicity, the absence of a projection step, and competitive numerical performance. While the vanilla Frank-Wolfe algorithm only ensures a worst-case complexity of $O(1/\epsilon)$, various recent results have shown that for strongly convex functions the method can be slightly modified to achieve linear convergence. However, this still leaves a large gap between sublinear $O(1/\epsilon)$ convergence and linear $O(\log(1/\epsilon))$ convergence to reach an $\epsilon$-approximate solution. Here, we present a new variant of Conditional Gradients that can dynamically adapt to the function's geometric properties using restarts, and thus smoothly interpolates between the sublinear and linear regimes.
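To make the restart idea concrete, the following is a minimal Python sketch, assuming minimization of a smooth convex function over the probability simplex, where the linear minimization oracle simply returns the best vertex. The gap-halving restart criterion and the open-loop step size are illustrative stand-ins for exposition, not the paper's exact scheduling scheme; `frank_wolfe_restarted` and its parameters are hypothetical names.

```python
import numpy as np

def frank_wolfe_restarted(grad, x0, n_outer=5, n_inner=200):
    """Sketch of a restarted Frank-Wolfe loop on the probability simplex.

    `grad` returns the gradient of the smooth convex objective. The
    restart criterion (halving of the Frank-Wolfe gap) is illustrative,
    not the paper's exact scheme.
    """
    x = x0.copy()
    for _ in range(n_outer):
        g0 = None  # Frank-Wolfe gap at the start of this restart
        for t in range(n_inner):
            grad_x = grad(x)
            # Linear minimization oracle: best vertex of the simplex.
            v = np.zeros_like(x)
            v[np.argmin(grad_x)] = 1.0
            gap = grad_x @ (x - v)  # Frank-Wolfe (duality) gap
            if g0 is None:
                g0 = gap
            # Restart (reset the step-size schedule) once the gap halves.
            if gap <= g0 / 2:
                break
            # Open-loop step size; a line search would also work here.
            gamma = 2.0 / (t + 2)
            x = (1 - gamma) * x + gamma * v
    return x

# Usage: minimize f(x) = 0.5 * ||A x - b||^2 over the simplex.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x_star = frank_wolfe_restarted(lambda x: A.T @ (A @ x - b),
                               np.full(10, 0.1))
```

Breaking the inner loop resets `t`, and with it the step-size schedule, which is what lets the method adapt: when the function's geometry makes the gap shrink quickly, restarts fire often and the iteration count drops toward the linear regime.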

Dates and versions

hal-01893922, version 1 (11-10-2018)

Identifiers

Cite

Thomas Kerdreux, Alexandre d'Aspremont, Sebastian Pokutta. Restarting Frank-Wolfe. Journal of Optimization Theory and Applications, 2022, 192 (3), pp. 799-829. ⟨hal-01893922⟩
