Asymptotic Analysis of Meta-learning as a Recommendation Problem
Abstract
Meta-learning covers various ways of leveraging past tasks to perform better on new tasks. In this paper, we focus on one particular formulation of meta-learning: learning to recommend algorithms. We consider a finite set of algorithms, which can be executed on tasks drawn i.i.d. from a "meta-distribution". We are interested in the generalization performance of meta-predict strategies, i.e., the expected performance of the recommended algorithms on new tasks drawn from the same meta-distribution. Assuming perfect knowledge of the meta-distribution (i.e., in the limit of a very large number of training tasks), we ask under which conditions algorithm recommendation can benefit from meta-learning, and thus, in some sense, "defeat" the No-Free-Lunch theorem. We analyze four meta-predict strategies: Random, Mean, Greedy and Optimal. We identify optimality conditions for these strategies. We also define a notion of meta-learning complexity as the cardinality of the minimal clique of complementary algorithms. We illustrate our findings with experiments conducted on artificial and real data.
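To make the four meta-predict strategies concrete, the following is a minimal sketch of one plausible reading of the abstract, not the authors' code: it assumes a task × algorithm score matrix `S` (rows are tasks sampled from the meta-distribution, higher scores are better), and the helper `greedy_portfolio` and portfolio size `k` are illustrative choices introduced here.

```python
# Hypothetical illustration (not from the paper): four recommendation
# strategies evaluated on a score matrix S, where S[t, a] is the
# performance of algorithm a on task t drawn from the meta-distribution.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_algos = 1000, 8
S = rng.random((n_tasks, n_algos))  # stand-in for real meta-data

# Random: recommend a uniformly random algorithm for every new task.
random_score = S.mean()

# Mean: always recommend the single algorithm with the best average score.
mean_score = S.mean(axis=0).max()

# Greedy (assumed variant): grow a portfolio of size k by adding, at each
# step, the algorithm that most improves the mean per-task best score.
def greedy_portfolio(S, k):
    chosen, best_val = [], -np.inf
    for _ in range(k):
        best_a = None
        for a in range(S.shape[1]):
            if a in chosen:
                continue
            val = S[:, chosen + [a]].max(axis=1).mean()
            if val > best_val:
                best_a, best_val = a, val
        chosen.append(best_a)
    return chosen, best_val

_, greedy_score = greedy_portfolio(S, k=3)

# Optimal (oracle): with perfect knowledge, pick the best algorithm per task.
optimal_score = S.max(axis=1).mean()

print(random_score, mean_score, greedy_score, optimal_score)
```

On i.i.d. random scores the four values are close, which echoes the No-Free-Lunch intuition; meta-learning can only help when algorithms are complementary across tasks, i.e., when `optimal_score` clearly exceeds `mean_score`.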