Conference paper, 2023

Convergence Guarantees of Overparametrized Wide Deep Inverse Prior

Abstract

Neural networks have become a prominent approach to solving inverse problems in recent years. Among the existing methods, the Deep Image/Inverse Prior (DIP) technique is an unsupervised approach that optimizes a highly overparametrized neural network to transform a random input into an object whose image under the forward model matches the observation. However, the level of overparametrization necessary for such methods remains an open problem. In this work, we investigate this question for a two-layer neural network with a smooth activation function. We provide overparametrization bounds under which such a network, trained via continuous-time gradient descent, converges exponentially fast with high probability, which allows us to derive recovery prediction bounds. This work is thus a first step towards a theoretical understanding of overparametrized DIP networks, and more broadly it contributes to the theoretical understanding of neural networks in inverse problem settings.
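
As a concrete illustration of the setup the abstract describes, here is a minimal sketch, assuming PyTorch: a two-layer network with a smooth (tanh) activation maps a fixed random input to a candidate signal, and its parameters are trained by gradient descent so that the forward model applied to that signal matches the observation. The linear operator A, the width k, and all dimensions are hypothetical choices for illustration, not the paper's experimental configuration, and the SGD loop is a discretization of the continuous-time gradient flow analyzed in the paper.

```python
# Minimal Deep Inverse Prior sketch (assumed setup, not the authors' code).
import torch

torch.manual_seed(0)

n, m, k = 64, 32, 2048           # signal dim, measurement dim, hidden width (overparametrized)
A = torch.randn(m, n) / m**0.5   # hypothetical linear forward operator
y = torch.randn(m)               # observation to fit

u = torch.randn(1, n)            # fixed random network input
net = torch.nn.Sequential(       # two-layer network with smooth activation
    torch.nn.Linear(n, k),
    torch.nn.Tanh(),
    torch.nn.Linear(k, n),
)

opt = torch.optim.SGD(net.parameters(), lr=1e-2)  # discretized gradient flow
for step in range(2000):
    opt.zero_grad()
    x = net(u).squeeze(0)                   # candidate signal g_theta(u)
    loss = 0.5 * (A @ x - y).pow(2).sum()   # data fidelity under the forward model
    loss.backward()
    opt.step()
```

Under the overparametrization bounds studied in the paper, this loss is expected to decrease exponentially fast with high probability.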
Main file
Results_DIP.pdf (468.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04040724, version 1 (22-03-2023)

Identifiers

Cite

Nathan Buskulic, Yvain Quéau, Jalal Fadili. Convergence Guarantees of Overparametrized Wide Deep Inverse Prior. 9th International Conference on Scale Space and Variational Methods in Computer Vision (SSVM 2023), May 2023, Santa Margherita di Pula, Italy. pp.406-417, ⟨10.1007/978-3-031-31975-4_31⟩. ⟨hal-04040724⟩