Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization
Abstract
Dimension is an inherent bottleneck in some modern learning tasks, where optimization methods suffer from the size of the data. In this paper, we study non-isotropic distributions of data and develop tools that aim at reducing these dimensional costs, replacing the dependency on the ambient dimension with one on an effective dimension. Based on non-asymptotic estimates of the metric entropy of ellipsoids, which we show generalize to infinite dimensions, and on a chaining argument, our uniform concentration bounds involve an effective dimension instead of the global dimension, improving over existing results. We show the importance of taking advantage of non-isotropic properties in learning problems with the following applications: i) we improve state-of-the-art results in statistical preconditioning for communication-efficient distributed optimization, ii) we introduce a non-isotropic randomized smoothing for nonsmooth optimization. Both applications cover a class of functions that encompasses empirical risk minimization (ERM) for linear models.
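To make the second application concrete, here is a minimal sketch of non-isotropic randomized smoothing: instead of perturbing with an isotropic Gaussian, one smooths a nonsmooth objective `f` with a Gaussian shaped by a matrix `A` (so the smoothing covariance is `mu^2 * A @ A.T`) and estimates the smoothed gradient by Monte Carlo via Stein's identity. The function name, the choice of estimator, and all parameters below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def smoothed_grad(f, x, A, mu=1e-2, n_samples=200, rng=None):
    # Illustrative sketch (not the paper's estimator): Monte Carlo gradient of
    # the smoothed surrogate
    #   f_mu(x) = E_z[ f(x + mu * A @ z) ],  z ~ N(0, I_d),
    # whose smoothing Gaussian has covariance mu^2 * A @ A.T -- non-isotropic
    # whenever A is not a multiple of the identity.  By Stein's identity,
    #   grad f_mu(x) = (1/mu) * A^{-T} E_z[ (f(x + mu * A z) - f(x)) * z ].
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    acc = np.zeros(d)
    for _ in range(n_samples):
        z = rng.standard_normal(d)
        acc += (f(x + mu * A @ z) - f(x)) / mu * z
    # Solve A.T @ g = acc / n_samples, i.e. apply A^{-T} to the averaged score.
    return np.linalg.solve(A.T, acc / n_samples)
```

For instance, taking `A` to be a Cholesky factor of an estimated data covariance aligns the smoothing with the non-isotropic geometry of the problem, which is the kind of structure the paper's effective-dimension bounds exploit.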
Keywords
Effective Dimension
Large Deviation
Chaining Method
Metric Entropy
Ellipsoids
Random Tensors
Statistical Preconditioning
Smoothing Technique
Machine Learning (stat.ML)
Machine Learning (cs.LG)
Probability (math.PR)
Statistics Theory (math.ST)
FOS: Computer and information sciences
FOS: Mathematics
60E15
60B20
60F10
Origin: Files produced by the author(s)