Conference Papers, Year: 2022

N-QGN: Navigation Map from a Monocular Camera using Quadtree Generating Networks

Abstract

Monocular depth estimation has been a popular area of research for several years, especially since self-supervised networks have shown increasingly good results in bridging the gap with supervised and stereo methods. However, these approaches focus on dense 3D reconstruction and sometimes on tiny details that are superfluous for autonomous navigation. In this paper, we propose to address this issue by estimating the navigation map under a quadtree representation. The objective is to create an adaptive depth map prediction that extracts only the details essential for obstacle avoidance, while regions of 3D space that leave ample room for navigation are assigned only approximate distances. Experiments on the KITTI dataset show that our method significantly reduces the amount of output information without a major loss of accuracy.
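The abstract describes the output as a quadtree whose coarse cells cover navigable free space and whose fine cells capture nearby obstacles. As a purely illustrative sketch of that representation (not the authors' method, which predicts the quadtree with a network), the hypothetical Python snippet below builds a quadtree from a dense depth map by splitting any cell whose depth range exceeds a threshold; the function name, thresholds, and toy scene are assumptions made for the example.

    import numpy as np

    def build_quadtree(depth, x=0, y=0, size=None, max_range=0.5, min_size=4):
        # Recursively split a square depth patch into quadrants.
        # A cell becomes a single leaf (its mean depth) when the depths it
        # covers vary by less than max_range metres; otherwise it is split
        # into four children. Leaves are (x, y, size, mean_depth) tuples.
        if size is None:
            size = depth.shape[0]  # assume a square depth map
        patch = depth[y:y + size, x:x + size]
        if size <= min_size or patch.max() - patch.min() < max_range:
            return [(x, y, size, float(patch.mean()))]  # homogeneous: one leaf
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += build_quadtree(depth, x + dx, y + dy, half,
                                         max_range, min_size)
        return leaves

    # Toy scene: flat far-away free space with one near obstacle.
    depth = np.full((64, 64), 20.0)   # 20 m of free space everywhere
    depth[24:40, 24:40] = 2.0         # a 2 m obstacle in the middle
    leaves = build_quadtree(depth)
    print(len(leaves), "leaves instead of", depth.size, "pixels")

The point of the representation is the compression: the flat background collapses into a few large leaves with approximate distances, while only the obstacle boundary is refined down to small cells.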
Main file: ICRA_2022-4.pdf (3.67 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03588304, version 1 (24-02-2022)

Identifiers

  • HAL Id: hal-03588304, version 1

Cite

Daniel Braun, Olivier Morel, Pascal Vasseur, Cédric Demonceaux. N-QGN: Navigation Map from a Monocular Camera using Quadtree Generating Networks. IEEE International Conference on Robotics and Automation, ICRA'22, May 2022, Philadelphia, United States. ⟨hal-03588304⟩