Journal article in IEEE Transactions on Geoscience and Remote Sensing, Year: 2021

Multimodal GANs: Toward Crossmodal Hyperspectral–Multispectral Image Segmentation

Abstract

This article addresses the problem of semantic segmentation with limited cross-modality data in large-scale urban scenes. Most prior works have attempted to address this issue with multimodal deep neural networks (DNNs). However, their ability to effectively blend the different properties of multiple modalities and to robustly learn representations from complex scenes remains limited, particularly when sufficient, well-annotated training images are unavailable. This poses a challenge for cross-modality learning with multimodal DNNs. To this end, we introduce two novel plug-and-play units into the network: a self-generative adversarial network (GAN) module and a mutual-GAN module, which learn perturbation-insensitive feature representations and bridge the gap between modalities, respectively, yielding more effective and robust information transfer. Furthermore, a patchwise progressive training strategy is devised to enable effective network learning with limited samples. We evaluate the proposed network on two multimodal (hyperspectral and multispectral) overhead image data sets and achieve a significant improvement over several state-of-the-art methods.
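As a reading aid only, the sketch below illustrates in PyTorch one plausible way the two plug-and-play adversarial units described above could be wired around per-modality feature encoders. It is not the authors' implementation: every name, layer size, band count, and loss term here (FeatureEncoder, FeatureDiscriminator, the 144-band and 4-band patches, the BCE objectives) is an illustrative assumption.

# Hedged sketch (not the authors' code): one way to realize the two
# "plug-and-play" adversarial units described in the abstract.
# All module names, sizes, and losses are illustrative assumptions.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Per-modality encoder producing feature maps of a shared size."""
    def __init__(self, in_channels, feat_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class FeatureDiscriminator(nn.Module):
    """Small discriminator over feature maps; reused for both adversarial units."""
    def __init__(self, feat_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 1),
        )
    def forward(self, f):
        return self.net(f)

# Toy tensors standing in for co-registered HSI / MSI patches (band counts assumed).
hsi = torch.randn(4, 144, 32, 32)   # hyperspectral patch, 144 bands (assumed)
msi = torch.randn(4, 4, 32, 32)     # multispectral patch, 4 bands (assumed)

enc_hsi, enc_msi = FeatureEncoder(144), FeatureEncoder(4)
d_self, d_mutual = FeatureDiscriminator(), FeatureDiscriminator()
bce = nn.BCEWithLogitsLoss()

f_hsi, f_msi = enc_hsi(hsi), enc_msi(msi)

# "Self-GAN"-style term: features of a perturbed input should fool the
# discriminator, pushing the encoder toward perturbation-insensitive features.
f_hsi_pert = enc_hsi(hsi + 0.05 * torch.randn_like(hsi))
loss_self = bce(d_self(f_hsi_pert), torch.ones(4, 1))

# "Mutual-GAN"-style term: make MSI features indistinguishable from HSI
# features, shrinking the gap between the two modalities.
loss_mutual = bce(d_mutual(f_msi), torch.ones(4, 1))

total_generator_loss = loss_self + loss_mutual
total_generator_loss.backward()

In this reading, the self-GAN term encourages features of a perturbed input to match those of the clean input, while the mutual-GAN term pulls the multispectral feature distribution toward the hyperspectral one, mirroring the roles the abstract assigns to the two modules.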
No file deposited

Dates and versions

hal-03429662, version 1 (15-11-2021)


Cite

Danfeng Hong, Jing Yao, Deyu Meng, Zongben Xu, Jocelyn Chanussot. Multimodal GANs: Toward Crossmodal Hyperspectral–Multispectral Image Segmentation. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59 (6), pp.5103-5113. ⟨10.1109/TGRS.2020.3020823⟩. ⟨hal-03429662⟩

