Learning extreme expected shortfall with neural networks
Abstract
New parameterizations for neural networks are proposed in order to estimate extreme Expected Shortfall in heavy-tailed settings. All proposed neural network estimators feature a bias correction based on an extension of the usual second-order condition to an arbitrary order. The convergence rate of the uniform error between the extreme log-Expected Shortfall and its neural network approximation is established. The rate depends on the order parameters that drive the bias of most extreme-value estimators. The finite-sample performance of the neural network estimator is compared with that of other bias-reduced extreme-value competitors on simulated data. The method is shown to outperform its competitors in difficult heavy-tailed situations where almost all other estimators fail. Finally, the neural network estimator is applied to investigate the behavior of extreme loss returns of cryptocurrencies.
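As a reminder of the quantity at stake, and using notation ($q_X$, $\gamma$, $\tau$) that is not taken from the paper itself, the Expected Shortfall of a loss variable $X$ at level $\tau$ is the standard tail average of its quantile function $q_X$; in the heavy-tailed case with tail index $\gamma \in (0,1)$ it is finite and, at extreme levels, asymptotically proportional to the quantile of the same level:
\[
\mathrm{ES}_X(\tau) \;=\; \frac{1}{1-\tau}\int_{\tau}^{1} q_X(s)\,\mathrm{d}s,
\qquad
\frac{\mathrm{ES}_X(\tau)}{q_X(\tau)} \;\longrightarrow\; \frac{1}{1-\gamma}
\quad \text{as } \tau \to 1.
\]
These are classical extreme-value facts recalled only to fix ideas; the precise parameterization, intermediate levels and bias-correction terms used by the proposed neural network estimators are those of the paper.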