TLCFuse: Temporal Multi-Modality Fusion Towards Occlusion-Aware Semantic Segmentation
Abstract
In autonomous driving, addressing occlusion scenarios is crucial yet challenging. Robust perception of the surroundings is essential for handling occlusions and aiding navigation. State-of-the-art models fuse LiDAR and camera data to produce impressive perception results, but detecting occluded objects remains challenging. In this paper, we emphasize the crucial role of temporal cues in reinforcing resilience against occlusions in the bird's eye view (BEV) semantic grid segmentation task. We propose a novel architecture that enables the processing of temporal multi-step inputs, where the input at each time step comprises the spatial information encoded from fusing LiDAR and camera sensor readings. We experimented on the real-world nuScenes dataset and our results outperformed other baselines, with particularly large differences when evaluating on occluded and partially-occluded vehicles. Additionally, we applied the proposed model to downstream tasks, such as multi-step BEV prediction and trajectory forecasting of the ego-vehicle. The qualitative results obtained from these tasks underscore the adaptability and effectiveness of our proposed approach.
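To make the described setup concrete, the following is a minimal PyTorch sketch of temporal multi-step BEV fusion: per-timestep LiDAR and camera BEV features are fused spatially, then aggregated over time before a semantic segmentation head. The module names, feature sizes, and the GRU-based temporal aggregation are assumptions for illustration only and do not reflect the actual TLCFuse architecture.

```python
# Illustrative sketch only: the real TLCFuse design (encoders, fusion, temporal
# module) is not specified here; names and shapes below are assumptions.
import torch
import torch.nn as nn


class TemporalBEVFusion(nn.Module):
    """Fuses per-timestep LiDAR+camera BEV features, then aggregates over time."""

    def __init__(self, lidar_ch=64, cam_ch=64, hidden_ch=128, num_classes=4):
        super().__init__()
        # Per-timestep spatial fusion of the two modalities (simple concat + conv).
        self.spatial_fuse = nn.Sequential(
            nn.Conv2d(lidar_ch + cam_ch, hidden_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Temporal aggregation over the sequence of fused BEV maps
        # (a plain GRU applied per BEV cell, for brevity).
        self.temporal = nn.GRU(hidden_ch, hidden_ch, batch_first=True)
        # Per-cell semantic segmentation head.
        self.head = nn.Conv2d(hidden_ch, num_classes, kernel_size=1)

    def forward(self, lidar_bev, cam_bev):
        # lidar_bev, cam_bev: (B, T, C, H, W) BEV feature maps for T past steps.
        B, T, _, H, W = lidar_bev.shape
        fused = []
        for t in range(T):
            x = torch.cat([lidar_bev[:, t], cam_bev[:, t]], dim=1)  # (B, C_l+C_c, H, W)
            fused.append(self.spatial_fuse(x))                      # (B, hidden, H, W)
        seq = torch.stack(fused, dim=1)                             # (B, T, hidden, H, W)
        # Flatten spatial dims so each BEV cell becomes one sequence for the GRU.
        seq = seq.permute(0, 3, 4, 1, 2).reshape(B * H * W, T, -1)
        _, last = self.temporal(seq)                                # (1, B*H*W, hidden)
        feat = last.squeeze(0).reshape(B, H, W, -1).permute(0, 3, 1, 2)
        return self.head(feat)                                      # (B, classes, H, W)


if __name__ == "__main__":
    model = TemporalBEVFusion()
    lidar = torch.randn(1, 3, 64, 32, 32)  # 3 past time steps, 32x32 BEV grid
    cam = torch.randn(1, 3, 64, 32, 32)
    print(model(lidar, cam).shape)          # torch.Size([1, 4, 32, 32])
```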