Conference paper, 2024

Are almost non-negative neural networks universal approximators?

Abstract

Non-negatively weighted neural networks (NNs) have proven instrumental in various applications, offering interpretability and mitigating overfitting. However, this advantage often comes at the expense of the model's expressivity. In this paper, we show that almost non-negative neural networks lift this limitation. More specifically, we introduce a novel class of almost non-negative neural networks with a particular algebraic structure, for which we recover universal approximation properties. Furthermore, to quantify the robustness of such architectures, we demonstrate that tight Lipschitz bounds can be derived in a computationally efficient manner. To validate our approach, we conduct classification experiments on a benchmark dataset of medical images; the results corroborate our theoretical findings.
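
As a point of reference for the abstract's claims, the sketch below is a hypothetical NumPy illustration, not the construction from the paper: it shows a toy ReLU network whose weights are clipped to be non-negative, together with the classical Lipschitz upper bound given by the product of the layers' spectral norms, the loose baseline that tighter bounds such as those derived in the paper improve upon. The names `nonneg_relu_layer` and `product_spectral_bound` are invented for this example.

```python
import numpy as np

# Minimal illustrative sketch, NOT the paper's construction: a ReLU network
# with weights projected onto the non-negative orthant, plus the classical
# (generally loose) Lipschitz upper bound given by the product of the
# layers' spectral norms.

def nonneg_relu_layer(W, b, x):
    """Apply one ReLU layer after clipping the weights at zero."""
    W_plus = np.maximum(W, 0.0)             # enforce non-negative weights
    return np.maximum(W_plus @ x + b, 0.0)  # ReLU is 1-Lipschitz

def product_spectral_bound(weights):
    """Product of spectral norms: a valid but typically loose bound."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

rng = np.random.default_rng(0)
W1 = np.maximum(rng.standard_normal((8, 4)), 0.0)
W2 = np.maximum(rng.standard_normal((2, 8)), 0.0)
x = rng.standard_normal(4)

h = nonneg_relu_layer(W1, np.zeros(8), x)
y = nonneg_relu_layer(W2, np.zeros(2), h)
print("output:", y)
print("naive Lipschitz bound:", product_spectral_bound([W1, W2]))
```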
Main file
ABBA_MLSP (1).pdf (799.46 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04698445, version 1 (16-09-2024)

Identifiers

  • HAL Id: hal-04698445, version 1

Cite

Vlad Vasilescu, Ana Neacsu, Jean-Christophe Pesquet. Are almost non-negative neural networks universal approximators?. MLSP 2024 - IEEE International Workshop on Machine Learning for Signal Processing, Sep 2024, London, United Kingdom. ⟨hal-04698445⟩
