Conference paper, 2022

A Semantic-Guided LiDAR-Vision Fusion Approach for Moving Objects Segmentation and State Estimation

Abstract

Moving Objects Segmentation (MOS) is critical for the safe operation of intelligent vehicles in dynamic environments. For state estimation tasks that rely on the assumption of static surroundings, identifying and filtering out moving objects plays an important role in robust ego-motion estimation. In this paper, a LiDAR-Vision fusion approach is developed to segment moving objects in the scene, using LiDAR-based semantic segmentation as a prior and vision-based geometric information for validation. The effectiveness of our approach in segmenting moving objects is highlighted by a comparison with traditional robust kernel-based outlier rejection methods. Our approach is benchmarked on three city-category sequences of the KITTI dataset, where it outperforms the kernel-based methods and achieves leading results of 77.9% average fitness and 7.65 cm RMSE.
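To make the fusion idea concrete, the sketch below shows one way a LiDAR semantic prior and image-space geometric validation could be combined to flag moving points. This is a minimal illustration under assumptions of our own: the class ids, the `project_to_image` and `segment_moving_points` helpers, the use of dense optical flow, and the pixel-residual threshold are all hypothetical, since the abstract does not specify the authors' actual pipeline.

```python
import numpy as np

# Semantic classes treated as potentially movable (hypothetical label ids).
MOVABLE_CLASSES = {1, 2, 3}  # e.g. car, pedestrian, cyclist


def project_to_image(points, T_cam_from_lidar, K):
    """Project Nx3 points (LiDAR frame) to pixel coordinates; return (uv, depth)."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]   # LiDAR -> camera frame
    uvw = (K @ pts_cam.T).T                           # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3], pts_cam[:, 2]


def segment_moving_points(points_t, labels_t, flow, T_cam_from_lidar, K,
                          T_t1_from_t, residual_thresh_px=3.0):
    """Flag LiDAR points as moving by fusing a semantic prior with image geometry.

    points_t    : Nx3 LiDAR points at time t
    labels_t    : N semantic labels from a LiDAR segmentation network (the prior)
    flow        : HxWx2 dense optical flow measured between images at t and t+1
    T_t1_from_t : 4x4 estimated ego-motion (coordinates in frame t -> frame t+1)
    """
    uv_t, depth_t = project_to_image(points_t, T_cam_from_lidar, K)

    # Where a *static* point should reproject at t+1 if only the ego vehicle moved.
    pts_h = np.hstack([points_t, np.ones((points_t.shape[0], 1))])
    pts_t1 = (T_t1_from_t @ pts_h.T).T[:, :3]
    uv_pred, _ = project_to_image(pts_t1, T_cam_from_lidar, K)

    moving = np.zeros(points_t.shape[0], dtype=bool)
    h, w = flow.shape[:2]
    for i, (u, v) in enumerate(uv_t):
        # Semantic prior: only points of movable classes in front of the camera
        # are candidates for the geometric check.
        if labels_t[i] not in MOVABLE_CLASSES or depth_t[i] <= 0:
            continue
        ui, vi = int(round(u)), int(round(v))
        if not (0 <= ui < w and 0 <= vi < h):
            continue
        # Geometric validation: compare the measured image motion against the
        # motion explained by ego-motion alone.
        uv_observed = np.array([u, v]) + flow[vi, ui]
        moving[i] = np.linalg.norm(uv_observed - uv_pred[i]) > residual_thresh_px
    return moving
```

In such a setup, the resulting mask would be used to discard moving points before ego-motion estimation, which is the role the abstract attributes to MOS in the state estimation task.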
Main file
IEEE_ITSC_2022_Songming_HAL.pdf (4.42 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03940221, version 1 (16-01-2023)

Identifiers

Cite

Songming Chen, Haixin Sun, Vincent Frémont. A Semantic-Guided LiDAR-Vision Fusion Approach for Moving Objects Segmentation and State Estimation. 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Oct 2022, Macau, China. pp.4308-4313, ⟨10.1109/ITSC55140.2022.9922443⟩. ⟨hal-03940221⟩