Scaling by subsampling for big data, with applications to statistical learning
Abstract
Handling large datasets and computing complex statistics on them require substantial computing resources. Subsampling methods, which calculate statistics of interest on small samples, are often used in practice to reduce computational complexity; the divide-and-conquer strategy is one example.
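As a minimal sketch of this strategy, the following Python snippet splits a large sample into disjoint blocks, evaluates a statistic on each block, and averages the block estimates; the toy Gaussian data, the block count, and the choice of the median as the statistic are our illustrative assumptions, not taken from the article.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1_000_000)  # toy "large" dataset

    def divide_and_conquer(data, statistic, n_blocks, rng):
        """Split the data into disjoint random blocks, evaluate the
        statistic on each block, and average the block estimates."""
        blocks = np.array_split(rng.permutation(data), n_blocks)
        return np.mean([statistic(block) for block in blocks])

    # Each block holds ~10,000 points, so the statistic is never
    # evaluated on the full million-point array.
    print(divide_and_conquer(x, np.median, n_blocks=100, rng=rng))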
In this article, we recall some results on subsampling distributions and derive a precise rate of convergence for these quantities and the corresponding quantiles. We also develop standardization techniques based on subsampling unstandardized statistics in the framework of large datasets. It is argued that using several subsampling distributions with different subsampling sizes provides a great deal of information on the behavior of statistical learning procedures: subsampling makes it possible to estimate the rate of convergence of different algorithms, to estimate the variability of complex statistics, to construct confidence intervals for out-of-sample errors, and to extrapolate their values to larger scales.
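To make the rate-estimation idea concrete, here is a hedged Python sketch that compares the interquantile spread of unstandardized subsampling distributions at two subsample sizes: if the estimator converges at rate b**(-beta), the spread scales accordingly, and beta can be read off the log-ratio of the two spreads. The exponential toy data, function names, subsample sizes, and quantile levels are our assumptions; the article's own estimators may differ.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.exponential(size=200_000)
    theta_full = x.mean()  # full-sample estimate, proxy for the target

    def subsampling_spread(data, b, n_rep=500, q=(0.25, 0.75)):
        """Interquantile range of the unstandardized subsampling
        distribution of theta_hat_b - theta_hat_n at subsample size b."""
        stats = [rng.choice(data, size=b, replace=False).mean() - theta_full
                 for _ in range(n_rep)]
        lo, hi = np.quantile(stats, q)
        return hi - lo

    # The spread scales like b**(-beta); comparing two subsample sizes
    # recovers beta (close to 1/2 for the sample mean).
    b1, b2 = 1_000, 4_000
    s1, s2 = subsampling_spread(x, b1), subsampling_spread(x, b2)
    beta_hat = np.log(s1 / s2) / np.log(b2 / b1)
    print(f"estimated rate exponent: {beta_hat:.2f}")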
These results are illustrated on simulations, but also on two important datasets frequently analyzed in the statistical learning community: EMNIST (handwritten digit recognition) and VeReMi (Vehicular Reference Misbehavior, for misbehavior detection in vehicular networks).