Conference paper, Year: 2022

Mirror Descent with Relative Smoothness in Measure Spaces, with application to Sinkhorn and EM

Abstract

Many problems in machine learning can be formulated as optimizing a convex functional over a vector space of measures. This paper studies the convergence of the mirror descent algorithm in this infinite-dimensional setting. Defining Bregman divergences through directional derivatives, we derive the convergence of the scheme for relatively smooth and convex pairs of functionals. These assumptions make it possible to handle non-smooth functionals such as the Kullback--Leibler (KL) divergence. Applying our result to joint distributions and KL, we show that Sinkhorn's primal iterations for entropic optimal transport in the continuous setting correspond to a mirror descent, and we obtain a new proof of its (sub)linear convergence. We also show that Expectation Maximization (EM) can always formally be written as a mirror descent. When optimizing only over the latent distribution while fixing the mixture parameters -- which corresponds to the Richardson--Lucy deconvolution scheme in signal processing -- we derive sublinear rates of convergence.
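For orientation, here is a minimal sketch of the kind of update the abstract refers to, assuming an objective functional $F$ over measures, a step size $\gamma > 0$, and the KL divergence playing the role of the Bregman divergence; the notation is illustrative and not taken verbatim from the paper:

$$\mu_{k+1} \in \operatorname*{argmin}_{\mu} \; \langle \nabla F(\mu_k), \mu - \mu_k \rangle + \frac{1}{\gamma}\,\mathrm{KL}(\mu \,\|\, \mu_k),$$

where $\nabla F(\mu_k)$ denotes a first variation (directional derivative) of $F$ at $\mu_k$. Relative smoothness of $F$ with respect to the entropy functional then stands in for the usual Lipschitz-gradient assumption in the convergence analysis.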

Dates and versions

hal-03811583, version 1 (12-10-2022)

Identifiers

Cite

Pierre-Cyril Aubin-Frankowski, Anna Korba, Flavien Léger. Mirror Descent with Relative Smoothness in Measure Spaces, with application to Sinkhorn and EM. NeurIPS 2022 - Thirty-sixth Conference on Neural Information Processing Systems, Nov 2022, New Orleans, United States. ⟨hal-03811583⟩