OmniSat: Self-Supervised Modality Fusion for Earth Observation - Archive ouverte HAL
Conference paper, Year: 2024

OmniSat: Self-Supervised Modality Fusion for Earth Observation

Abstract

The diversity and complementarity of sensors available for Earth Observation (EO) call for developing bespoke self-supervised multimodal learning approaches. However, current multimodal EO datasets and models typically focus on a single data type, either mono-date images or time series, which limits their impact. To address this issue, we introduce OmniSat, a novel architecture able to merge diverse EO modalities into expressive features without labels by exploiting their alignment. To demonstrate the advantages of our approach, we create two new multimodal datasets by augmenting existing ones with new modalities. As demonstrated for three downstream tasks (forestry, land cover classification, and crop mapping), OmniSat can learn rich representations without supervision, leading to state-of-the-art performance in semi- and fully supervised settings. Furthermore, our multimodal pretraining scheme improves performance even when only one modality is available for inference. The code and dataset are available at https://github.com/gastruc/OmniSat.
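To make the idea of "exploiting alignment" between modalities concrete, below is a minimal, hypothetical sketch of cross-modal contrastive pretraining between two co-registered EO modalities (e.g. an aerial image patch and a Sentinel-2 time series over the same area). It is not the OmniSat architecture from the paper; all class names, dimensions, and the choice of a symmetric InfoNCE objective are illustrative assumptions.

```python
# Hypothetical sketch of cross-modal alignment pretraining for EO patches.
# NOT the OmniSat architecture; names, dimensions, and loss are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Toy encoder mapping one modality's patch features to a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def alignment_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE: embeddings of the same ground patch across modalities should agree."""
    logits = z_a @ z_b.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0))       # i-th sample in A matches i-th in B
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    # Assumed inputs: aerial image features (dim 512) and Sentinel-2
    # time-series features (dim 64) extracted over the same ground patches.
    enc_aerial, enc_s2 = ModalityEncoder(512), ModalityEncoder(64)
    x_aerial, x_s2 = torch.randn(8, 512), torch.randn(8, 64)
    loss = alignment_loss(enc_aerial(x_aerial), enc_s2(x_s2))
    loss.backward()
    print(f"alignment loss: {loss.item():.4f}")
```

Under this kind of objective, no labels are needed: the spatial co-registration of the modalities itself provides the positive pairs, which is the general principle the abstract refers to.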
Main file
Cross_modal___ECCV2024 (1).pdf (7.53 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04556598 , version 1 (23-04-2024)
hal-04556598 , version 2 (12-07-2024)

License

Public domain

Identifiers

Cite

Guillaume Astruc, Nicolas Gonthier, Clement Mallet, Loic Landrieu. OmniSat: Self-Supervised Modality Fusion for Earth Observation. ECCV 2024, Milan, Italy. ⟨hal-04556598v2⟩
126 views
50 downloads
