Are Large-scale Datasets Necessary for Self-Supervised Pre-training? - Archive ouverte HAL
Preprint / Working Paper. Year: 2022

Are Large-scale Datasets Necessary for Self-Supervised Pre-training?

Abstract

Pre-training models on large-scale datasets, like ImageNet, is a standard practice in computer vision. This paradigm is especially effective for tasks with small training sets, for which high-capacity models tend to overfit. In this work, we consider a self-supervised pre-training scenario that only leverages the target task data. We consider datasets, like Stanford Cars, Sketch or COCO, which are order(s) of magnitude smaller than ImageNet. Our study shows that denoising autoencoders, such as BEiT or a variant that we introduce in this paper, are more robust to the type and size of the pre-training data than popular self-supervised methods trained by comparing image embeddings. We obtain competitive performance compared to ImageNet pre-training on a variety of classification datasets from different domains. On COCO, when pre-training solely on COCO images, the detection and instance segmentation performance surpasses the supervised ImageNet pre-training in a comparable setting.
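To make the denoising-autoencoder style of pre-training mentioned in the abstract concrete, below is a minimal sketch of masked image modeling in PyTorch: image patches are randomly masked, a small transformer encoder reconstructs them, and the loss is computed only on the masked patches. All names and hyperparameters here (TinyEncoder, a 16-pixel patch size, mask_ratio=0.5, raw-pixel regression targets) are illustrative assumptions, not the authors' BEiT or SplitMask implementation; BEiT, for instance, predicts discrete visual tokens rather than pixels.

# Minimal sketch of denoising-autoencoder (masked image modeling) pre-training.
# Hypothetical toy code; module names and targets are assumptions, not the paper's method.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy transformer encoder over image patches (stand-in for a ViT)."""
    def __init__(self, patch_dim, dim=128, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, patch_dim)  # predicts raw pixels of masked patches

    def forward(self, patches, mask):
        x = self.embed(patches)
        # Replace masked patch embeddings (zeros here for brevity; real methods use a learned mask token).
        x = torch.where(mask.unsqueeze(-1), torch.zeros_like(x), x)
        x = self.blocks(x)
        return self.head(x)

def patchify(imgs, p=16):
    """Split (B, C, H, W) images into (B, N, C*p*p) flattened patches."""
    B, C, H, W = imgs.shape
    x = imgs.unfold(2, p, p).unfold(3, p, p)           # B, C, H/p, W/p, p, p
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

def pretrain_step(model, imgs, mask_ratio=0.5):
    """One self-supervised step: mask patches, reconstruct them, return the loss."""
    patches = patchify(imgs)
    mask = torch.rand(patches.shape[:2], device=imgs.device) < mask_ratio
    pred = model(patches, mask)
    # Loss only on masked patches, as in masked-image-modeling objectives.
    return ((pred - patches) ** 2)[mask].mean()

if __name__ == "__main__":
    model = TinyEncoder(patch_dim=3 * 16 * 16)
    imgs = torch.randn(4, 3, 224, 224)                 # e.g. a small batch of target-task images
    loss = pretrain_step(model, imgs)
    loss.backward()
    print(float(loss))

Because the reconstruction target comes from the image itself, this kind of objective needs no labels and, as the abstract argues, can be trained directly on the (much smaller) target-task data rather than on ImageNet.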
Main file
splitmask_haltools.pdf (577.9 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03572721, version 1 (14-02-2022)

Identifiers

  • HAL Id: hal-03572721, version 1

Cite

Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jégou, et al. Are Large-scale Datasets Necessary for Self-Supervised Pre-training? 2022. ⟨hal-03572721⟩
203 Views
265 Downloads
