Extending the proximal point algorithm beyond convexity
Abstract
Introduced in the 1970s by Martinet for minimizing convex functions and extended shortly afterwards by Rockafellar to monotone inclusion problems, the proximal point algorithm has turned out to be a viable computational method for solving various classes of (structured) optimization problems, even beyond the convex framework.
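For orientation (in standard notation, with a step size \lambda > 0), the classical proximal point iteration for minimizing a function f reads

x^{k+1} = \operatorname{prox}_{\lambda f}(x^k) := \operatorname*{argmin}_{y} \Big\{ f(y) + \tfrac{1}{2\lambda} \|y - x^k\|^2 \Big\},

and the proximity operator \operatorname{prox}_{\lambda f} is single-valued whenever f is proper, convex, and lower semicontinuous; part of the extensions discussed below concern what survives of this picture when convexity is relaxed.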
In this talk we discuss some extensions of proximal point type algorithms beyond convexity. First, we propose a relaxed-inertial proximal point type algorithm for solving optimization problems that consist of minimizing strongly quasiconvex functions whose variables lie in finite-dimensional linear subspaces; this approach can be extended to equilibrium problems involving such functions (a generic template of such a scheme is sketched below).
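As a minimal sketch only, with generic parameter names assumed here rather than taken from the talk, a relaxed-inertial proximal point template for minimizing f combines an inertial extrapolation with a relaxed proximal step:

y^k = x^k + \alpha_k (x^k - x^{k-1}) (inertial extrapolation),
x^{k+1} = (1 - \rho_k) y^k + \rho_k \operatorname{prox}_{\lambda_k f}(y^k) (relaxed proximal step),

where \alpha_k \ge 0 are inertial and \rho_k \in (0, 2) relaxation parameters; in the setting of the talk the minimization is additionally restricted to the given linear subspace, and the precise parameter conditions guaranteeing convergence for strongly quasiconvex f are those presented there.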
Then we briefly discuss another generalized convexity notion for functions, which we call prox-convexity, for which the proximity operator is single-valued and firmly nonexpansive, and we show that the standard proximal point algorithm and Malitsky's Golden Ratio Algorithm (originally proposed for solving convex mixed variational inequalities) remain convergent when the involved functions are prox-convex as well.
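For reference, in its fixed-step form, Malitsky's Golden Ratio Algorithm for the mixed variational inequality of finding x^* with \langle F(x^*), y - x^* \rangle + g(y) - g(x^*) \ge 0 for all y iterates

\bar{x}^k = \frac{(\varphi - 1) x^k + \bar{x}^{k-1}}{\varphi}, \qquad x^{k+1} = \operatorname{prox}_{\lambda g}\big(\bar{x}^k - \lambda F(x^k)\big),

with \varphi \in (1, (1+\sqrt{5})/2] and step size \lambda \le \varphi/(2L) for an L-Lipschitz operator F; recall also that an operator T is firmly nonexpansive when \|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle for all x, y. The talk relaxes the convexity assumption on g to prox-convexity while retaining convergence.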