Edge-Conditioned Feature Transform Network for Hyperspectral and Multispectral Image Fusion
Abstract
Despite recent advances achieved by deep learning techniques in the fusion of a low-spatial-resolution hyperspectral image (LR-HSI) and a high-spatial-resolution multispectral image (HR-MSI), it remains a challenge to reconstruct the high-spatial-resolution HSI (HR-HSI) with more accurate spatial details and fewer spectral distortions, since low-level structural information such as sharp edges tends to be weakened or lost as the network depth grows. To tackle this issue, we propose an edge-conditioned feature transform network (EC-FTN) in this article, which is composed of three parts: a feature extraction network (FEN), a feature fusion and transformation network (FFTN), and an image reconstruction network (IRN). First, two computationally efficient FENs with 3-D convolutions and reshaping layers extract the joint spectral-spatial features of the input images. Then, the FFTN, conditioned on an edge-map prior, fuses and transforms the features adaptively; it is built from a fusion node and several cascaded feature modulation modules (FMMs) equipped with feature-wise modulation layers. Specifically, the edge map is generated via transfer learning: the Sobel operator is applied to feature maps of the red-green-blue (RGB) version of the HR-MSI produced by the pretrained VGG16 model, without extra training. Finally, the desired HR-HSI is recovered from the transformed features through the IRN. Furthermore, we design a weighted combinatorial loss function consisting of mean absolute error, image gradient difference, and spectral angle terms to guide the training. Experiments on both ground-based and remotely sensed datasets demonstrate that our EC-FTN outperforms state-of-the-art methods in visual and quantitative evaluations, as well as in the reconstruction of fine details.
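To make the FEN concrete, the following is a minimal PyTorch sketch of one feature extraction network built from 3-D convolutions followed by a reshaping layer, assuming the spectral dimension is treated as the 3-D depth and then folded into the channel axis; the layer count and width (`feat3d`) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FEN(nn.Module):
    """Sketch of a feature extraction network: 3-D convolutions over the
    spectral-spatial cube, then a reshaping layer that folds the spectral
    depth into 2-D feature channels (widths are assumptions)."""

    def __init__(self, feat3d=8):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, feat3d, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat3d, feat3d, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):                     # x: (B, C, H, W) image cube
        b, c, h, w = x.shape
        f = self.conv3d(x.unsqueeze(1))       # (B, feat3d, C, H, W): joint spectral-spatial features
        return f.reshape(b, -1, h, w)         # reshaping layer: (B, feat3d * C, H, W)
```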
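The edge-map prior can be sketched as below: shallow feature maps of the RGB version of the HR-MSI are taken from a frozen, pretrained VGG16 (transfer learning, no extra training), and the Sobel operator is applied to them. The choice of layer (relu1_2) and the channel-wise averaging of gradient magnitudes are our assumptions for illustration.

```python
import torch
import torch.nn.functional as F
import torchvision

def edge_map_from_vgg(rgb):
    """Sketch: Sobel edges of shallow VGG16 features of an RGB image.
    rgb: (B, 3, H, W), normalized with ImageNet statistics."""
    vgg = torchvision.models.vgg16(
        weights=torchvision.models.VGG16_Weights.DEFAULT
    ).features.eval()
    with torch.no_grad():
        feat = rgb
        for layer in vgg[:4]:                 # up to relu1_2: 64 channels, full resolution
            feat = layer(feat)
    # Sobel kernels, applied depthwise to every feature channel.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    c = feat.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(feat, kx, padding=1, groups=c)
    gy = F.conv2d(feat, ky, padding=1, groups=c)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)
    return mag.mean(dim=1, keepdim=True)      # (B, 1, H, W) edge map
```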
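One FMM with a feature-wise modulation layer can be understood through the following sketch, assuming a FiLM-style transform in which the edge map is mapped to per-channel scale (gamma) and shift (beta) parameters that modulate the fused features; the condition branch, channel width, and residual connection are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FMM(nn.Module):
    """Sketch of a feature modulation module conditioned on an edge map."""

    def __init__(self, channels=64):
        super().__init__()
        # Condition branch: lift the 1-channel edge map to (gamma, beta).
        self.cond = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2 * channels, 3, padding=1),
        )
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x, edge):
        gamma, beta = self.cond(edge).chunk(2, dim=1)
        out = self.body(x) * (1 + gamma) + beta   # feature-wise modulation layer
        return x + out                            # residual connection (assumed)

# Cascading several FMMs, each re-conditioned on the same edge map:
#   for fmm in fmms: x = fmm(x, edge)
```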
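Finally, the weighted combinatorial loss combines the three named terms; a minimal sketch follows, where the weights `lam_g` and `lam_s` are placeholders (their values are not given in this abstract) and the gradient-difference and spectral-angle formulations are standard definitions assumed for illustration.

```python
import torch
import torch.nn.functional as F

def gradient_difference(pred, target):
    """L1 difference of horizontal/vertical image gradients."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return (dx(pred) - dx(target)).abs().mean() + (dy(pred) - dy(target)).abs().mean()

def spectral_angle(pred, target, eps=1e-8):
    """Mean spectral angle between per-pixel spectra; inputs are (B, C, H, W)."""
    dot = (pred * target).sum(dim=1)
    norm = pred.norm(dim=1) * target.norm(dim=1)
    return torch.acos((dot / (norm + eps)).clamp(-1 + eps, 1 - eps)).mean()

def ecftn_loss(pred, target, lam_g=0.1, lam_s=0.1):
    """Weighted combination of MAE, gradient difference, and spectral angle
    (weights are illustrative placeholders)."""
    mae = F.l1_loss(pred, target)
    return mae + lam_g * gradient_difference(pred, target) + lam_s * spectral_angle(pred, target)
```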