Super Resolution Guided Deep Network for Land Cover Classification From Remote Sensing Images
Abstract
The low resolution of remote sensing images often limits land cover classification (LCC) performance. Super resolution (SR) can improve image resolution, but it also greatly increases the computational burden of LCC because of the larger input images. In this article, we propose the SR-guided deep network (SRGDN) framework, which exploits meaningful structures from higher-resolution images to improve LCC performance without incurring additional computational cost. The SRGDN consists of two branches (an SR branch and an LCC branch) and a guidance module. The SR branch increases the resolution of remote sensing images. Since imaging sensors cannot directly provide high- and low-resolution image pairs to train the SR branch, we introduce a self-supervised generative adversarial network (GAN) to estimate the downsampling kernel that produces such pairs. The LCC branch adopts the high-resolution network (HRNet) to retain as much resolution information as possible with only a few downsampling operations. The guidance module teaches the LCC branch to learn high-resolution information from the SR branch without taking higher-resolution images as inputs. Furthermore, the guidance module introduces spatial pyramid pooling (SPP) to match feature maps of different sizes across the two branches. At the testing stage, the guidance module and the SR branch can be removed and therefore incur no additional computational cost. Experimental results on three real datasets demonstrate the superiority of the proposed method over several well-known LCC approaches.
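To illustrate how SPP can match feature maps of different spatial sizes across the two branches, the following is a minimal PyTorch sketch. The bin sizes, the L2 distillation loss, and the `SPPGuidance` module name are assumptions for illustration, not the paper's exact design; the abstract only states that SPP is used so that differently sized feature maps from the SR and LCC branches can be compared.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPGuidance(nn.Module):
    """Hypothetical guidance module: spatial pyramid pooling maps feature
    tensors of different heights/widths onto fixed-size descriptors so the
    LCC branch can be supervised by the SR branch (a sketch, assuming an
    L2 loss between pooled descriptors)."""

    def __init__(self, bins=(1, 2, 4)):
        super().__init__()
        self.bins = bins  # assumed pyramid levels (1x1, 2x2, 4x4 grids)

    def spp(self, x):
        # Adaptive pooling onto fixed grids makes the descriptor length
        # independent of the input's spatial size.
        n, c = x.shape[:2]
        parts = [F.adaptive_avg_pool2d(x, b).view(n, c, -1) for b in self.bins]
        return torch.cat(parts, dim=2)  # shape: (N, C, sum of b*b)

    def forward(self, feat_lcc, feat_sr):
        # Guidance loss: push LCC-branch features toward the (detached)
        # higher-resolution SR-branch features in SPP space.
        return F.mse_loss(self.spp(feat_lcc), self.spp(feat_sr).detach())

# Example usage with dummy feature maps of different spatial sizes
# (channel counts must match; 64 channels here is an assumption).
guidance = SPPGuidance()
f_lcc = torch.randn(2, 64, 32, 32)  # feature map from the LCC branch
f_sr = torch.randn(2, 64, 64, 64)   # larger feature map from the SR branch
loss_guide = guidance(f_lcc, f_sr)
```

During training, such a guidance term would be added to the segmentation loss; at test time only the LCC branch runs, which is consistent with the abstract's claim that removing the guidance module and SR branch adds no inference cost.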