Journal article, Computer Vision and Image Understanding, 2024

Transformer fusion for indoor RGB-D semantic segmentation

Guillaume Allibert
Christophe Stolz
Chao Ma

Abstract

Fusing geometric cues with visual appearance is a central theme in RGB-D indoor semantic segmentation. Existing methods commonly adopt convolutional modules to aggregate multi-modal features, paying little attention to explicitly leveraging long-range dependencies in feature fusion. As a result, it is challenging for existing methods to accurately segment objects with large-scale variations. In this paper, we propose a novel transformer-based fusion scheme, named TransD-Fusion, to better model contextual awareness. Specifically, TransD-Fusion consists of a self-refinement module, a calibration scheme with cross-interaction, and a depth-guided fusion. The objective is to first improve modality-specific features with self- and cross-attention, and then exploit geometric cues to better segment objects sharing a similar visual appearance. Additionally, our transformer fusion benefits from a semantic-aware position encoding which spatially constrains the attention to neighboring pixels. Extensive experiments on RGB-D benchmarks demonstrate that the proposed method outperforms state-of-the-art methods by large margins.
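The paper itself is not reproduced on this page, so the following is only an illustrative sketch of the kind of fusion the abstract describes: self-refinement of each modality via self-attention, calibration via cross-modal attention between the RGB and depth streams, and a final fused output. All names (TransDFusionBlock, embed_dim, num_heads, fuse) and the fusion details are assumptions, not the authors' implementation, and the semantic-aware position encoding is omitted.

```python
# Hypothetical sketch of a transformer-based RGB-D fusion block, loosely following
# the abstract (self-refinement, cross-interaction, fusion). Not the paper's code.
import torch
import torch.nn as nn


class TransDFusionBlock(nn.Module):
    def __init__(self, embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Self-refinement: each modality attends to itself.
        self.rgb_self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.depth_self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Cross-interaction: RGB queries attend to depth keys/values, and vice versa.
        self.rgb_cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.depth_cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Fusion: a learned projection of the concatenated, calibrated streams.
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, rgb_tokens: torch.Tensor, depth_tokens: torch.Tensor) -> torch.Tensor:
        # rgb_tokens, depth_tokens: (batch, num_patches, embed_dim)
        rgb = rgb_tokens + self.rgb_self_attn(rgb_tokens, rgb_tokens, rgb_tokens)[0]
        dep = depth_tokens + self.depth_self_attn(depth_tokens, depth_tokens, depth_tokens)[0]
        # Calibrate each modality with cues from the other.
        rgb = rgb + self.rgb_cross_attn(rgb, dep, dep)[0]
        dep = dep + self.depth_cross_attn(dep, rgb, rgb)[0]
        # Fuse the two streams into a single feature map for a segmentation head.
        return self.fuse(torch.cat([rgb, dep], dim=-1))


# Example: fuse 1024 patch tokens per image from paired RGB and depth encoders.
block = TransDFusionBlock()
rgb = torch.randn(2, 1024, 256)
depth = torch.randn(2, 1024, 256)
fused = block(rgb, depth)  # (2, 1024, 256)
```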
Main file: CVIU2024_Clean (1).pdf (3.6 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04714659, version 1 (30-09-2024)

Identifiers

  • HAL Id: hal-04714659
  • DOI: 10.1016/j.cviu.2024.104174

Cite

Zongwei Wu, Zhuyun Zhou, Guillaume Allibert, Christophe Stolz, Cédric Demonceaux, et al. Transformer fusion for indoor RGB-D semantic segmentation. Computer Vision and Image Understanding, 2024, 249, pp. 104174. ⟨10.1016/j.cviu.2024.104174⟩. ⟨hal-04714659⟩