Conference paper, Year: 2020

Learning to solve TV regularised problems with unrolled algorithms

Abstract

Total Variation (TV) is a popular regularization strategy that promotes piecewise-constant signals by constraining the ℓ1-norm of the first-order derivative of the estimated signal. The resulting optimization problem is usually solved with iterative algorithms such as proximal gradient descent, primal-dual algorithms, or ADMM. However, such methods can require a very large number of iterations to converge to a suitable solution. In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV-regularized problems. While this could be done using the synthesis formulation, we demonstrate that it leads to slower performance. The main difficulty in applying such methods to the analysis formulation lies in proposing a way to compute derivatives through the proximal operator. As our main contribution, we develop and characterize two approaches to do so, describe their benefits and limitations, and discuss the regime in which they can actually improve over iterative procedures. We validate these findings with experiments on synthetic and real data.
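For concreteness, the analysis formulation of the 1D TV-regularised problem described above reads min_u ½‖x − u‖₂² + λ‖Du‖₁, where D is the first-order finite-difference operator. Substituting u = Lz, with L the discrete integration (cumulative-sum) operator, gives the equivalent synthesis formulation, a Lasso-type problem in z whose ℓ1 proximal operator is simple soft-thresholding. The sketch below is a minimal illustration of unrolling proximal gradient descent for this synthesis formulation with one learnable step size per layer, in the spirit of LISTA; it is not the authors' implementation, and the class and parameter names are hypothetical.

```python
import torch


def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return torch.sign(z) * torch.clamp(z.abs() - tau, min=0.0)


class UnrolledSynthesisTV(torch.nn.Module):
    """Unrolled proximal gradient descent for the synthesis formulation of
    1D TV denoising:  min_z 0.5 * ||x - L z||^2 + lam * ||z[1:]||_1,
    where L is the lower-triangular cumulative-sum (integration) matrix.
    Each unrolled iteration has its own learnable step size (LISTA-style).
    Hypothetical sketch for illustration only.
    """

    def __init__(self, n, n_layers, lam):
        super().__init__()
        self.lam = lam
        self.register_buffer("L", torch.tril(torch.ones(n, n)))
        # Initialise every step size at 1 / ||L||_2^2, the Lipschitz constant
        # of the gradient of the data-fit term, as in plain ISTA.
        lipschitz = float(torch.linalg.matrix_norm(self.L, ord=2)) ** 2
        self.steps = torch.nn.Parameter(torch.full((n_layers,), 1.0 / lipschitz))

    def forward(self, x):
        # x: (batch, n) noisy signals; z: synthesis coefficients.
        z = torch.zeros_like(x)
        for step in self.steps:
            grad = (z @ self.L.T - x) @ self.L  # L^T (L z - x), batched
            z = z - step * grad
            # l1 prox on all coefficients except the first, which encodes the
            # signal's offset and is left unpenalised.
            z = torch.cat([z[:, :1], soft_threshold(z[:, 1:], step * self.lam)], dim=1)
        return z @ self.L.T  # reconstructed signal u = L z
```

Training such a network amounts to minimising, for instance, a quadratic loss between its output and clean signals over a dataset, with gradients flowing through the soft-thresholding. For the analysis formulation, by contrast, the proximal operator of ‖D·‖₁ has no closed form, so back-propagating through it is non-trivial; this is precisely the differentiation problem the two approaches in the paper address.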
Main file: Learning_to_solve_TV_regularised_problems_with_unrolled_algorithms.pdf (714.02 KB). Origin: files produced by the author(s).

Dates and versions

hal-02954181 , version 1 (30-09-2020)
hal-02954181 , version 2 (19-10-2020)

Identifiers

  • HAL Id: hal-02954181, version 1

Cite

Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau. Learning to solve TV regularised problems with unrolled algorithms. Advances in Neural Information Processing Systems (NeurIPS), Dec 2020, Vancouver, Canada. ⟨hal-02954181v1⟩