Deep neural networks-based relevant latent representation learning for hyperspectral image classification
Abstract
The classification of hyperspectral images is a challenging task due to the high-dimensional feature space, with a large number of spectral bands and a low number of labeled training samples. To overcome these challenges, we propose a novel methodology for hyperspectral image classification based on multi-view deep neural networks that fuses spectral and spatial features while using only a small number of labeled samples. First, we process the initial hyperspectral image to extract a set of spectral and spatial features. Each spectral vector is the spectral signature of a pixel of the image. The spatial features are extracted with a simple deep autoencoder that reduces the high dimensionality of the data while taking into account the neighborhood region of each pixel. Second, we propose a multi-view deep autoencoder model that fuses the spectral and spatial features extracted from the hyperspectral image into a joint latent representation space. Finally, a semi-supervised graph convolutional network is trained on the fused latent representation to perform the hyperspectral image classification. The main advantage of the proposed approach is that it automatically extracts relevant information while preserving the spectral and spatial features of the data, improving the classification of hyperspectral images even when the number of labeled samples is low. Experiments are conducted on three real hyperspectral images: the Indian Pines, Salinas, and Pavia University datasets. Results show that the proposed approach achieves competitive classification performance compared with the state of the art.
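To make the fusion step concrete, the following is a minimal sketch (not the authors' code) of a two-view autoencoder of the kind the abstract describes: one encoder per view maps the spectral signature and the spatial feature vector of a pixel into a shared latent space, the two codes are fused, and a decoder per view reconstructs its input from the fused code. All dimensions (n_bands, spatial_dim, latent_dim), the averaging fusion, and the layer sizes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewAutoencoder(nn.Module):
    """Sketch of a two-view autoencoder with a joint latent space."""

    def __init__(self, n_bands=200, spatial_dim=64, latent_dim=32):
        super().__init__()
        # One encoder per view, both projecting into the same latent space.
        self.enc_spectral = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.enc_spatial = nn.Sequential(
            nn.Linear(spatial_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        # One decoder per view, reconstructing from the fused latent code.
        self.dec_spectral = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, n_bands))
        self.dec_spatial = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, spatial_dim))

    def forward(self, x_spec, x_spat):
        # Fuse the two views; averaging is one simple choice (an assumption here).
        z = 0.5 * (self.enc_spectral(x_spec) + self.enc_spatial(x_spat))
        return self.dec_spectral(z), self.dec_spatial(z), z

# Toy usage: 16 pixels, 200 spectral bands, 64 spatial features per pixel.
model = MultiViewAutoencoder()
x_spec, x_spat = torch.randn(16, 200), torch.randn(16, 64)
rec_spec, rec_spat, z = model(x_spec, x_spat)
# Training would minimize the sum of the per-view reconstruction losses;
# the fused code z would then feed the downstream semi-supervised GCN classifier.
loss = F.mse_loss(rec_spec, x_spec) + F.mse_loss(rec_spat, x_spat)
```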