Neural-network-based learning applied to extreme statistics and rare-event sampling
Abstract
Feedforward neural networks based on Rectified Linear Units (ReLU) cannot efficiently approximate quantile functions that are unbounded, in particular those of heavy-tailed distributions. This is a major issue for their use in quantile approximation or in any sampling scheme (such as generative modelling). In this talk we design and compare several new parametrizations of the quantile function (and of the generator of a Generative Adversarial Network) adapted to this framework, building on extreme-value theory. We provide theoretical convergence results (as the complexity of the network increases) and illustrate the resulting methodologies with experiments on both simulated and real data.
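To illustrate the underlying difficulty and one possible remedy (this is a hedged sketch for intuition, not the construction presented in the talk): a ReLU feedforward network is piecewise linear, hence bounded on the compact input domain [0, 1], whereas a heavy-tailed quantile function such as the Pareto quantile Q(u) = (1 - u)^(-1/α) diverges as u → 1. A natural extreme-value-theory-inspired fix is to multiply the bounded network by an explicit tail factor (1 - u)^(-ξ) with a learnable tail index ξ. The class name `TailQuantileNet` and all hyperparameters below are hypothetical choices made for this example.

```python
import torch
import torch.nn as nn


class TailQuantileNet(nn.Module):
    """ReLU MLP multiplied by a Pareto-type tail factor (1 - u)^(-xi).

    A hypothetical parametrization: the MLP stays bounded on [0, 1],
    while the tail factor carries the divergence as u -> 1.
    """

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Unconstrained parameter, mapped to a positive tail index via softplus.
        self.raw_xi = nn.Parameter(torch.zeros(1))

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        xi = nn.functional.softplus(self.raw_xi)      # tail index xi > 0
        tail = (1.0 - u).clamp_min(1e-6).pow(-xi)     # diverges as u -> 1
        return tail * self.mlp(u)                     # bounded net x tail factor


if __name__ == "__main__":
    # Fit the Pareto(alpha = 2) quantile Q(u) = (1 - u)^(-1/2) on (0, 1).
    torch.manual_seed(0)
    model = TailQuantileNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(2000):
        u = torch.rand(512, 1) * 0.999                # stay strictly below u = 1
        target = (1.0 - u).pow(-0.5)
        loss = ((model(u) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Probe deep in the tail, where a plain ReLU network would saturate.
    u_tail = torch.tensor([[0.99], [0.999], [0.9999]])
    print(model(u_tail).detach().squeeze())
    print((1.0 - u_tail).pow(-0.5).squeeze())
```

In this sketch the unbounded behaviour is imposed by the analytic factor rather than learned, so the network only has to approximate a bounded correction; the same idea transfers to a GAN generator by composing it with such a tail transformation.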