Preprint / Working paper, Year: 2024

Convergence and Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems

Nathan Buskulic
Jalal M. Fadili
Yvain Quéau

Abstract

Neural networks have become a prominent approach to solving inverse problems in recent years. While a plethora of such methods has been developed to solve inverse problems empirically, clear theoretical guarantees for them are still lacking. On the other hand, many works have proved convergence of neural networks to optimal solutions in a more general setting, using overparametrization as a way to control the Neural Tangent Kernel. In this work we investigate how to bridge these two worlds, and we provide deterministic convergence and recovery guarantees for the class of unsupervised feedforward multilayer neural networks trained to solve inverse problems. We also derive overparametrization bounds under which a two-layer Deep Inverse Prior network with a smooth activation function benefits from our guarantees.
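To make the setting concrete, here is a minimal, hypothetical sketch of the unsupervised training scheme the abstract describes: a two-layer network g_theta with a smooth activation is fed a fixed random input z and trained to fit the observations y through the forward operator A. The network width, learning rate, operator, and iteration count below are illustrative choices, not the paper's.

```python
import torch

torch.manual_seed(0)

n, m, width = 64, 32, 2048           # signal size, number of measurements, hidden width (illustrative)
A = torch.randn(m, n) / m ** 0.5     # forward operator (random Gaussian here, for illustration only)
x_true = torch.randn(n)              # unknown signal
y = A @ x_true                       # observations (noiseless for simplicity)

# Two-layer network g_theta(z) = W2 * softplus(W1 * z), with a smooth activation.
z = torch.randn(n)                   # fixed random input, not optimized
W1 = (torch.randn(width, n) / n ** 0.5).requires_grad_()
W2 = (torch.randn(n, width) / width ** 0.5).requires_grad_()

opt = torch.optim.SGD([W1, W2], lr=1e-2)
for it in range(2000):
    opt.zero_grad()
    x_hat = W2 @ torch.nn.functional.softplus(W1 @ z)  # g_theta(z)
    loss = 0.5 * ((A @ x_hat - y) ** 2).sum()          # data-fidelity loss in measurement space
    loss.backward()
    opt.step()

print(f"final loss {loss.item():.3e}, recovery error {torch.norm(x_hat - x_true).item():.3e}")
```

This only illustrates the training objective (fitting y through A with an unsupervised network prior); the paper's contribution concerns when such iterates provably converge and what they recover, which this sketch does not establish.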
Main file: DIP_JMIV_Experimental-1.pdf (670.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04059168 , version 1 (22-09-2023)
hal-04059168 , version 2 (20-10-2023)
hal-04059168 , version 3 (15-03-2024)

Identifiers

HAL Id: hal-04059168

Cite

Nathan Buskulic, Jalal M. Fadili, Yvain Quéau. Convergence and Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems. 2024. ⟨hal-04059168v3⟩