Self-supervised spatio-temporal representation learning of Satellite Image Time Series
Abstract
In this paper, a new self-supervised strategy for learning meaningful representations of complex optical Satellite Image Time Series (SITS) is presented. The proposed method, named U-BARN (Unet-BERT spAtio-temporal Representation eNcoder), exploits irregularly sampled SITS. The designed architecture learns rich and discriminative features from unlabeled data, enhancing the synergy between the spatio-spectral and temporal dimensions. To train on unlabeled data, a time-series reconstruction pretext task inspired by the BERT strategy is proposed. A large-scale unlabeled Sentinel-2 dataset is used to pre-train U-BARN. To demonstrate its feature-learning capability, the SITS representations encoded by U-BARN are then fed into a shallow classifier to generate semantic segmentation maps. Experiments are conducted on a labeled dataset (PASTIS). Two ways of exploiting the U-BARN pre-training are considered: the U-BARN weights are either frozen (U-BARN FR) or fine-tuned (U-BARN FT). The results demonstrate that the SITS representations produced by U-BARN FR are more effective for land-cover classification than those of a supervised-trained linear layer. We then observe that, in scenarios with scarce reference data, fine-tuning brings a significant performance gain over fully supervised approaches. We also investigate the influence of the percentage of elements masked during pre-training on the quality of the SITS representation. Finally, semantic segmentation results show that the fully supervised U-BARN architecture slightly outperforms the spatio-temporal baseline (U-TAE).
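To make the BERT-inspired pretext task concrete, the sketch below illustrates one plausible form of masked time-series reconstruction in PyTorch: random acquisition dates are corrupted and the network is trained to reconstruct them. This is a minimal illustration, not the paper's implementation; the function name, the zero-masking scheme, the mask ratio, and the mean-squared-error loss are assumptions, and `encoder` stands in for any spatio-temporal network such as U-BARN.

```python
import torch


def masked_reconstruction_loss(encoder, sits, mask_ratio=0.3):
    """BERT-style pretext task (illustrative sketch): randomly mask
    acquisition dates of a Satellite Image Time Series and train the
    encoder to reconstruct them.

    sits: tensor of shape (B, T, C, H, W) -- batch, time, channels,
          height, width; T may vary across samples (irregular sampling).
    encoder: any network mapping (B, T, C, H, W) -> (B, T, C, H, W).
    """
    B, T = sits.shape[:2]
    # Boolean mask over the temporal dimension (True = masked date).
    mask = torch.rand(B, T, device=sits.device) < mask_ratio
    corrupted = sits.clone()
    corrupted[mask] = 0.0  # zero out the masked acquisition dates
    recon = encoder(corrupted)
    # Reconstruction error is computed only on the masked dates, so the
    # encoder must infer them from the surrounding spatio-temporal context.
    return ((recon - sits) ** 2)[mask].mean()
```

Restricting the loss to the masked positions, as in BERT, forces the encoder to exploit the unmasked dates rather than learn an identity mapping, which is what makes the learned features useful for downstream tasks such as semantic segmentation.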