A Spherical Robot-Centered Representation for Urban Navigation
Abstract
This paper describes a generic method for vision-based navigation in real urban environments. The proposed approach relies on a representation of the scene based on spherical images augmented with depth information and a spherical saliency map, both constructed in a learning phase. The saliency maps are built by identifying the image points that best condition the spherical projection constraints. During navigation, an image-based registration technique combined with robust outlier rejection is used to locate the vehicle precisely. The main objective of this work is to reduce computation time by better representing and selecting information from the reference sphere and the current image without degrading matching accuracy. It is shown that, by using this pre-learned global spherical memory, no error is accumulated along the trajectory and the vehicle is localized without drift.
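To illustrate the kind of registration step summarized above, the sketch below evaluates a robustly weighted photometric cost between a reference sphere (intensity and depth at sampled directions) and the current spherical image, for a candidate pose. This is a minimal sketch under assumptions, not the paper's implementation: the spherical parameterization, the Huber weighting, and all function names are illustrative choices; in a full system this cost would be minimized over SE(3), e.g. with an iteratively reweighted Gauss-Newton scheme.

```python
import numpy as np

def sphere_to_ray(theta, phi):
    """Unit viewing ray for spherical coordinates (theta: azimuth, phi: elevation)."""
    return np.stack([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)], axis=-1)

def ray_to_sphere(rays):
    """Inverse mapping: unit rays -> (theta, phi)."""
    x, y, z = rays[..., 0], rays[..., 1], rays[..., 2]
    return np.arctan2(y, x), np.arcsin(np.clip(z, -1.0, 1.0))

def huber_weights(residuals, delta=0.1):
    """Huber M-estimator weights, used here to down-weight photometric outliers."""
    a = np.abs(residuals)
    w = np.ones_like(a)
    mask = a > delta
    w[mask] = delta / a[mask]
    return w

def robust_photometric_cost(ref_intensity, ref_depth, ref_theta, ref_phi,
                            cur_image_fn, R, t, delta=0.1):
    """Robust photometric error between the reference sphere and the current
    spherical image for a candidate pose (R, t) of the current frame.
    cur_image_fn(theta, phi) samples the current spherical image."""
    # Back-project the reference sphere samples to 3D using the learned depth.
    rays = sphere_to_ray(ref_theta, ref_phi)
    points = rays * ref_depth[..., None]
    # Transform into the current frame and re-project onto the unit sphere.
    points_cur = points @ R.T + t
    rays_cur = points_cur / np.linalg.norm(points_cur, axis=-1, keepdims=True)
    theta_c, phi_c = ray_to_sphere(rays_cur)
    # Photometric residuals with robust (Huber) weights.
    r = cur_image_fn(theta_c, phi_c) - ref_intensity
    w = huber_weights(r, delta)
    return np.sum(w * r**2)

# Toy usage: with a synthetic scene and the true (identity) pose the cost is ~0.
N = 1000
theta = np.random.uniform(-np.pi, np.pi, N)
phi = np.random.uniform(-np.pi / 3, np.pi / 3, N)
depth = np.random.uniform(2.0, 30.0, N)
image_fn = lambda th, ph: 0.5 + 0.5 * np.cos(th) * np.cos(ph)
intensity = image_fn(theta, phi)
print(robust_photometric_cost(intensity, depth, theta, phi, image_fn,
                              np.eye(3), np.zeros(3)))
```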