Empirical Risk Minimization with Generalized Relative Entropy Regularization
Abstract
The empirical risk minimization problem with relative entropy regularization (ERM-RER) is investigated under the assumption that the reference measure is a σ-finite measure, rather than a probability measure. This generalization allows for a larger degree of flexibility in the incorporation of prior knowledge over the set of models. In this setting, the interplay among the regularization parameter, the reference measure, the risk function, and the expected empirical risk induced by the solution of the ERM-RER problem, which is proved to be unique, is characterized. For a fixed dataset, when the models are sampled from the probability measure that solves the ERM-RER problem, the empirical risk is shown to be a sub-Gaussian random variable. The sensitivity of the expected empirical risk to deviations from the ERM-RER solution is then studied, and upper and lower bounds on the expected empirical risk are provided. Finally, the expectation of the sensitivity is shown to be upper bounded, up to a constant factor, by the square root of the lautum information between the models and the datasets.
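To make the setting concrete, the following is a minimal sketch of the ERM-RER problem and its solution, in notation assumed for illustration (the dataset z, the empirical risk function L_z, the reference measure Q, the regularization parameter λ, and the log-partition function K_{Q,z} are not defined in this abstract):

\[
  \textrm{(ERM-RER)} \qquad
  \min_{P} \; \int \mathsf{L}_z(\theta)\,\mathrm{d}P(\theta)
  \;+\; \lambda\, D(P \,\|\, Q),
\]
where the minimization is over probability measures \(P\) on the set of models that are absolutely continuous with respect to the σ-finite reference measure \(Q\), \(\lambda > 0\) is the regularization parameter, and \(D(P \,\|\, Q)\) denotes the relative entropy. Under this sketch, the unique solution \(P^{\star}\) is a Gibbs probability measure, characterized by its Radon–Nikodym derivative with respect to \(Q\):
\[
  \frac{\mathrm{d}P^{\star}}{\mathrm{d}Q}(\theta)
  = \exp\!\Big( -K_{Q,z}\big(-\tfrac{1}{\lambda}\big)
                - \tfrac{1}{\lambda}\,\mathsf{L}_z(\theta) \Big),
  \qquad
  K_{Q,z}(t) \triangleq \log \int \exp\big(t\,\mathsf{L}_z(\theta)\big)\,\mathrm{d}Q(\theta),
\]
whenever the normalizing integral defining \(K_{Q,z}(-\tfrac{1}{\lambda})\) is finite; the term \(\exp(-K_{Q,z}(-\tfrac{1}{\lambda}))\) ensures that \(P^{\star}\) integrates to one.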