Conference paper · Year: 2024

InfraParis: A multi-modal and multi-task autonomous driving dataset

Abstract

Current deep neural networks (DNNs) for autonomous driving computer vision are typically trained on specific datasets that involve only a single type of data and urban scenes. Consequently, these models struggle to handle new objects, noise, nighttime conditions, and diverse scenarios, a capability that is essential for safety-critical applications. Despite ongoing efforts to enhance the resilience of computer vision DNNs, progress has been sluggish, partly due to the absence of benchmarks featuring multiple modalities. We introduce a novel and versatile dataset named InfraParis that supports multiple tasks across three modalities: RGB, depth, and infrared. We assess various state-of-the-art baselines, encompassing models for semantic segmentation, object detection, and depth estimation. More visualizations and the download link for InfraParis are available at https://ensta-u2is.github.io/infraParis/.
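For readers who want to experiment with the three modalities, below is a minimal Python sketch of a paired loader. The directory layout (rgb/, depth/, infrared/ folders with matching file names) and the class MultiModalDriving are hypothetical illustrations for this sketch, not the official InfraParis API; the dataset's actual structure is documented on the project page linked above.

```python
# Minimal sketch of a multi-modal sample loader, assuming a hypothetical
# layout: <root>/rgb/*.png, <root>/depth/*.png, <root>/infrared/*.png,
# where corresponding frames share the same file stem.
from pathlib import Path

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class MultiModalDriving(Dataset):
    """Pairs RGB, depth, and infrared frames by shared file stem."""

    def __init__(self, root: str):
        self.root = Path(root)
        # Index samples by the RGB files; assume depth and infrared
        # directories contain files with the same stems.
        self.stems = sorted(p.stem for p in (self.root / "rgb").glob("*.png"))

    def __len__(self) -> int:
        return len(self.stems)

    def __getitem__(self, idx: int) -> dict:
        stem = self.stems[idx]
        # Return one dictionary per sample, with one array per modality.
        return {
            "rgb": np.asarray(Image.open(self.root / "rgb" / f"{stem}.png")),
            "depth": np.asarray(Image.open(self.root / "depth" / f"{stem}.png")),
            "infrared": np.asarray(Image.open(self.root / "infrared" / f"{stem}.png")),
        }
```

Wrapping such a Dataset in a standard torch.utils.data.DataLoader then yields batched dictionaries that a multi-task model can consume, with per-task heads reading whichever modalities they need.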

Dates and versions

hal-04321062, version 1 (04-12-2023)

Cite

Gianni Franchi, Marwane Hariat, Xuanlong Yu, Nacim Belkhir, Antoine Manzanera, et al. InfraParis: A multi-modal and multi-task autonomous driving dataset. WACV 2024 - IEEE/CVF Winter Conference on Applications of Computer Vision, Jan 2024, Waikoloa, United States. ⟨hal-04321062⟩