Conference Paper, 2024

TLCFuse: Temporal Multi-Modality Fusion Towards Occlusion-Aware Semantic Segmentation

Abstract

In autonomous driving, addressing occlusion scenarios is crucial yet challenging. Robust surrounding perception is essential for handling occlusions and aiding navigation. State-of-the-art models fuse LiDAR and camera data to produce impressive perception results, but detecting occluded objects remains challenging. In this paper, we emphasize the crucial role of temporal cues in reinforcing resilience against occlusions in the bird's eye view (BEV) semantic grid segmentation task. We propose a novel architecture that enables the processing of temporal multi-step inputs, where the input at each time step comprises the spatial information encoded from fusing LiDAR and camera sensor readings. We experimented on the real-world nuScenes dataset, and our results outperform other baselines, with particularly large differences when evaluating on occluded and partially occluded vehicles. Additionally, we applied the proposed model to downstream tasks, such as multi-step BEV prediction and trajectory forecasting of the ego-vehicle. The qualitative results obtained from these tasks underscore the adaptability and effectiveness of our proposed approach.
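As a rough illustration of the temporal multi-step design described in the abstract, the following PyTorch sketch fuses a short sequence of per-step BEV feature maps (assumed to already encode fused LiDAR and camera readings) into a single semantic grid. All module names, layer choices, and tensor dimensions here are hypothetical and chosen for brevity; this is not the authors' actual TLCFuse architecture, only a minimal sketch of the input/output structure the paper describes.

import torch
import torch.nn as nn

class TemporalBEVSegmenter(nn.Module):
    def __init__(self, in_channels=64, hidden=64, num_steps=3, num_classes=4):
        super().__init__()
        # Shared per-step encoder applied to each fused LiDAR+camera BEV feature map.
        self.step_encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Temporal fusion: concatenate the encoded steps along channels
        # and mix them with a 1x1 convolution.
        self.temporal_fuse = nn.Conv2d(hidden * num_steps, hidden, kernel_size=1)
        # Per-cell semantic classification head over the BEV grid.
        self.head = nn.Conv2d(hidden, num_classes, kernel_size=1)

    def forward(self, bev_seq):
        # bev_seq: (B, T, C, H, W), one fused BEV feature map per time step.
        feats = [self.step_encoder(bev_seq[:, t]) for t in range(bev_seq.size(1))]
        fused = self.temporal_fuse(torch.cat(feats, dim=1))
        return self.head(fused)  # (B, num_classes, H, W) segmentation logits

# Example: 3 past time steps of 64-channel, 200x200 BEV features.
model = TemporalBEVSegmenter()
logits = model(torch.randn(2, 3, 64, 200, 200))
print(logits.shape)  # torch.Size([2, 4, 200, 200])

In this simplified sketch, temporal aggregation is a channel-wise concatenation followed by a 1x1 convolution; the point is only that occluded cells at the current step can draw on encoded features from earlier steps where the object was visible.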

Main file
TLCFuse.pdf (5.06 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04717193, version 1 (01-10-2024)


Cite

Gustavo Salazar-Gomez, Wenqian Liu, Manuel Alejandro Diaz-Zapata, David Sierra González, Christian Laugier. TLCFuse: Temporal Multi-Modality Fusion Towards Occlusion-Aware Semantic Segmentation. IV 2024 - 35th IEEE Intelligent Vehicles Symposium, Jun 2024, Jeju Island, South Korea. pp.2110-2116, ⟨10.1109/IV55156.2024.10588460⟩. ⟨hal-04717193⟩