Global Random Maximization of Feedforward Neural Network
Abstract
This paper addresses the problem of maximizing functions expressed as a sum of independent terms over a bounded closed domain in R^d. A common approach is to first obtain a regression approximation of these functions using neural networks. In this contribution, we propose estimating the maximum by empirically sampling the neural network approximator's output at independent, uniformly distributed inputs. We consider two metrics for quantifying the discrepancy between the sample maximum and the neural network's true maximum: the asymptotic distribution and the mean-square error. The convergence rate depends on the shape of the distribution of the neural network's output near its maximum. In some cases, and under minimal assumptions, the convergence rate is dimension-dependent; with additional assumptions, however, it is free from the curse of dimensionality. A practical implementation on a canonical example illustrates how accepting an estimation bias can substantially improve the convergence rate. This latter approach paves the way for new theoretical and algorithmic developments.
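As a rough illustration of the sampling scheme described above, the following minimal sketch estimates the maximum of a feedforward network over [0, 1]^d by evaluating it at i.i.d. uniform inputs and taking the sample maximum. The network weights, the domain, the dimension d, and the sample size n are all placeholder assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder feedforward approximator f: [0, 1]^d -> R.
# A trained network would be used in practice; here random weights
# merely stand in so the sketch is self-contained and runnable.
d = 5
W1 = rng.normal(size=(d, 64))
b1 = rng.normal(size=64)
W2 = rng.normal(size=(64, 1))
b2 = rng.normal(size=1)

def network(x):
    """Evaluate the feedforward approximator at inputs x of shape (n, d)."""
    h = np.tanh(x @ W1 + b1)      # hidden layer with tanh activation
    return (h @ W2 + b2).ravel()  # scalar output per input point

# Monte Carlo estimate of the global maximum: draw n independent,
# uniformly distributed inputs on the bounded domain [0, 1]^d and
# take the sample maximum of the network's output.
n = 100_000
samples = rng.uniform(0.0, 1.0, size=(n, d))
values = network(samples)

estimated_max = values.max()
argmax_input = samples[values.argmax()]
print(f"sample maximum ~ {estimated_max:.4f} at x = {argmax_input}")
```

How close this sample maximum gets to the true maximum of the network, and at what rate as n grows, is precisely what the paper analyzes through the asymptotic distribution and the mean-square error.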