Multimodal GANs: Toward Crossmodal Hyperspectral–Multispectral Image Segmentation
Abstract
This article addresses the problem of semantic segmentation with limited cross-modality data in large-scale urban scenes. Most prior works have attempted to address this issue with multimodal deep neural networks (DNNs). However, their ability to effectively blend the different properties of multiple modalities and to robustly learn representations from complex scenes remains limited, particularly in the absence of sufficient, well-annotated training images. This poses a challenge for cross-modality learning with multimodal DNNs. To this end, we introduce two novel plug-and-play units into the network: a self-generative adversarial network (GAN) module, which learns perturbation-insensitive feature representations, and a mutual-GAN module, which bridges the gap between modalities, together yielding more effective and robust information transfer. Furthermore, a patchwise progressive training strategy is devised to enable effective network learning with limited samples. We evaluate the proposed network on two multimodal (hyperspectral and multispectral) overhead image data sets and achieve significant improvements over several state-of-the-art methods.
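The two plug-and-play units can be read as lightweight adversarial heads attached to a multimodal segmentation backbone. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation: the names `FeatureDiscriminator`, `SelfGANUnit`, and `MutualGANUnit`, the discriminator architecture, and the loss choices are assumptions made for clarity.

```python
# Hypothetical sketch, not the authors' implementation: a minimal PyTorch
# illustration of the two plug-and-play adversarial units described above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureDiscriminator(nn.Module):
    """Scores how likely a feature map is to come from the reference distribution."""

    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # One probability per sample in the batch.
        return torch.sigmoid(self.net(feat)).mean(dim=(1, 2, 3))


class AdversarialUnit(nn.Module):
    """Shared machinery: push 'source' features toward 'reference' features."""

    def __init__(self, channels: int):
        super().__init__()
        self.discriminator = FeatureDiscriminator(channels)

    def generator_loss(self, source_feat: torch.Tensor) -> torch.Tensor:
        # The backbone ("generator") tries to make source features look real.
        score = self.discriminator(source_feat)
        return F.binary_cross_entropy(score, torch.ones_like(score))

    def discriminator_loss(self, reference_feat: torch.Tensor,
                           source_feat: torch.Tensor) -> torch.Tensor:
        real = self.discriminator(reference_feat.detach())
        fake = self.discriminator(source_feat.detach())
        return (F.binary_cross_entropy(real, torch.ones_like(real))
                + F.binary_cross_entropy(fake, torch.zeros_like(fake)))


class SelfGANUnit(AdversarialUnit):
    """Self-GAN idea: features of a perturbed view should be indistinguishable
    from features of the clean view (perturbation-insensitive representation)."""


class MutualGANUnit(AdversarialUnit):
    """Mutual-GAN idea: multispectral features should be indistinguishable
    from hyperspectral features (reducing the cross-modality gap)."""


if __name__ == "__main__":
    hs_feat = torch.randn(2, 64, 32, 32)  # hyperspectral-branch features (assumed shape)
    ms_feat = torch.randn(2, 64, 32, 32)  # multispectral-branch features (assumed shape)
    mutual = MutualGANUnit(channels=64)
    print("generator loss:", mutual.generator_loss(ms_feat).item())
    print("discriminator loss:", mutual.discriminator_loss(hs_feat, ms_feat).item())
```

Under these assumptions, the generator-side losses would be added to the segmentation loss while the discriminators are updated in alternation, the usual GAN training pattern; the actual loss weighting and the patchwise progressive schedule are specified in the paper itself.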