Have Foundational Models Seen Satellite Images?
Abstract
This paper investigates the zero-shot performance of pre-trained foundation models on remote sensing tasks. Recent advances in self-supervised learning suggest that these models, when trained on vast amounts of unlabeled data, can generalize across a range of downstream tasks. We empirically evaluate such models on standard remote sensing benchmarks, namely EuroSAT and BigEarthNet-S2, to assess whether they encountered satellite imagery during their training phase. We also examine the impact of adding geospatial domain-specific textual descriptions of the classes, contrasting them with standard class-based prompts. Our findings indicate that fine-tuned BLIP models achieve superior zero-shot performance on these benchmarks compared to their standard counterparts, suggesting that fine-tuning on standard benchmarks enhances performance. The addition of geospatial context, however, influences performance variably depending on the specific model and dataset. This work provides insights into the applicability of foundation models to remote sensing tasks and lays the groundwork for further research.
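To make the prompting contrast described above concrete, the sketch below performs zero-shot classification over the ten EuroSAT land-cover classes with a CLIP model from Hugging Face transformers, comparing plain class-based prompts against prompts with added geospatial wording. This is a minimal illustration under stated assumptions: the checkpoint, the two prompt templates, and the blank stand-in image are illustrative choices, not the authors' exact setup, and the paper's fine-tuned BLIP models are not reproduced here.

```python
# Minimal sketch of zero-shot classification with standard vs. geospatial prompts.
# Prompt templates are illustrative assumptions, not the paper's verbatim prompts.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# The ten EuroSAT land-cover classes.
classes = [
    "annual crop", "forest", "herbaceous vegetation", "highway",
    "industrial buildings", "pasture", "permanent crop",
    "residential buildings", "river", "sea or lake",
]

# Standard class-based prompts vs. prompts with added geospatial context
# (the "satellite image" wording is a hypothetical example of such context).
standard_prompts = [f"a photo of {c}" for c in classes]
geospatial_prompts = [f"a satellite image of {c}" for c in classes]

# In practice this would be an EuroSAT test patch; a blank RGB image
# stands in so the snippet runs end to end without downloading data.
image = Image.new("RGB", (64, 64))

for name, prompts in [("standard", standard_prompts),
                      ("geospatial", geospatial_prompts)]:
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape (1, num_classes)
    pred = logits.softmax(dim=-1).argmax(dim=-1).item()
    print(f"{name} prompts -> predicted class: {classes[pred]}")
```

In an actual evaluation, the loop would run over the full EuroSAT test split and report accuracy per prompt style, which is the comparison the abstract refers to.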