Are almost non-negative neural networks universal approximators?
Abstract
Non-negatively weighted neural networks (NNs) have proven instrumental in various applications, offering interpretability and mitigating overfitting. However, this advantage often comes at the expense of model expressivity. In this paper, we show that almost non-negative neural networks lift this limitation. More specifically, we introduce a novel class of almost non-negative neural networks with a particular algebraic structure, for which we recover universal approximation properties. Furthermore, to quantify the robustness of such architectures, we demonstrate that tight Lipschitz bounds can be derived and computed efficiently. To validate our approach, we conduct classification experiments on a benchmark dataset of medical images. The results support our theoretical findings.
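The abstract does not specify the algebraic structure of the proposed class, so the sketch below is purely illustrative and is not the paper's construction. Assuming PyTorch, it shows one common way to make a network "almost" non-negative (an unconstrained signed first layer followed by hidden layers whose weights are reparameterized to be entrywise non-negative) together with a crude Lipschitz upper bound obtained as the product of layer spectral norms; the class and function names (`NonNegLinear`, `AlmostNonNegNet`, `lipschitz_upper_bound`) are hypothetical.

```python
# Illustrative sketch only -- not the construction from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonNegLinear(nn.Module):
    """Linear layer whose effective weights are forced non-negative
    via a softplus reparameterization of a free parameter."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    @property
    def weight(self):
        # Entrywise non-negative by construction.
        return F.softplus(self.raw_weight)

    def forward(self, x):
        return F.linear(x, self.weight, self.bias)

class AlmostNonNegNet(nn.Module):
    """Hypothetical 'almost non-negative' network: a signed first layer
    followed by non-negatively weighted hidden layers."""
    def __init__(self, dims):
        super().__init__()
        self.first = nn.Linear(dims[0], dims[1])  # unconstrained (signed) weights
        self.hidden = nn.ModuleList(
            NonNegLinear(dims[i], dims[i + 1]) for i in range(1, len(dims) - 1)
        )

    def forward(self, x):
        x = torch.relu(self.first(x))
        for layer in self.hidden[:-1]:
            x = torch.relu(layer(x))
        return self.hidden[-1](x)  # no activation on the output layer

def lipschitz_upper_bound(net):
    """Crude Lipschitz upper bound for the whole network: the product of
    layer spectral norms (ReLU is 1-Lipschitz, so it contributes a factor 1)."""
    bound = torch.linalg.matrix_norm(net.first.weight, ord=2)
    for layer in net.hidden:
        bound = bound * torch.linalg.matrix_norm(layer.weight, ord=2)
    return bound

net = AlmostNonNegNet([16, 32, 32, 1])
print(float(lipschitz_upper_bound(net)))
```

The product-of-norms bound shown here is generally loose; the point of the paper, per the abstract, is that the specific algebraic structure of the proposed class admits tight Lipschitz bounds that remain computationally efficient.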