Preprint / Working paper. Year: 2024

OmniSat: Self-Supervised Modality Fusion for Earth Observation

Abstract

The field of Earth Observation (EO) offers a wealth of data from diverse sensors, presenting a great opportunity for advancing self-supervised multimodal learning. However, current multimodal EO datasets and models typically focus on a single data type, either mono-date images or time series, which limits their expressivity. We introduce OmniSat, a novel architecture that exploits the spatial alignment between multiple EO modalities to learn expressive multimodal representations without labels. To demonstrate the advantages of combining modalities of different natures, we augment two existing datasets with new modalities. As demonstrated on three downstream tasks (forestry, land cover classification, and crop mapping), OmniSat can learn rich representations in an unsupervised manner, leading to improved performance in the semi- and fully supervised settings, even when only one modality is available for inference. The code and dataset are available at github.com/gastruc/OmniSat.
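The abstract does not spell out the training objective, so the following is only a minimal sketch of what label-free learning from spatially aligned modalities can look like: a contrastive alignment between per-patch embeddings of two co-registered modalities. All names here (PatchEncoder, contrastive_alignment_loss) are hypothetical and are not taken from the OmniSat codebase.

```python
# Hypothetical sketch: self-supervised fusion of spatially aligned EO modalities
# via a contrastive objective between per-patch embeddings. Illustrative only;
# it does not reproduce the OmniSat architecture or its exact losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Encodes one modality into one embedding per spatial patch."""
    def __init__(self, in_channels: int, dim: int = 256):
        super().__init__()
        # Patchify convolution plus a linear projection; a real model would use
        # a modality-specific backbone (e.g. a temporal transformer for time series).
        self.patchify = nn.Conv2d(in_channels, dim, kernel_size=16, stride=16)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.patchify(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return F.normalize(self.proj(tokens), dim=-1)

def contrastive_alignment_loss(za: torch.Tensor, zb: torch.Tensor, tau: float = 0.07):
    """InfoNCE between embeddings of the same patch seen by two modalities."""
    B, N, D = za.shape
    za, zb = za.reshape(B * N, D), zb.reshape(B * N, D)
    logits = za @ zb.t() / tau                       # similarity of all patch pairs
    targets = torch.arange(B * N, device=za.device)  # spatially aligned patches are positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: a VHR image and a Sentinel-2 composite covering the same tile,
# split into spatially aligned 16x16 patches.
enc_vhr, enc_s2 = PatchEncoder(3), PatchEncoder(10)
vhr = torch.randn(2, 3, 128, 128)
s2 = torch.randn(2, 10, 128, 128)
loss = contrastive_alignment_loss(enc_vhr(vhr), enc_s2(s2))
loss.backward()
```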
Main file: 2404.08351.pdf (7.93 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04556598 , version 1 (23-04-2024)
hal-04556598 , version 2 (12-07-2024)

Identifiers

HAL Id: hal-04556598

Cite

Guillaume Astruc, Nicolas Gonthier, Clement Mallet, Loic Landrieu. OmniSat: Self-Supervised Modality Fusion for Earth Observation. 2024. ⟨hal-04556598v1⟩

Collections

PARISTECH
126 Views
50 Downloads
