Conference paper, Year: 2016

Hybrid Visual and Inertial Position and Orientation Estimation based on Known Urban 3D Models

Abstract

More and more pedestrians own devices (such as smartphones) that integrate a wide array of low-cost sensors (camera, IMU, magnetometer and GNSS receiver). GNSS is usually used for pedestrian localization in urban environments, but the signal suffers from inaccuracies of several meters. To obtain a more accurate localization and improve pedestrian navigation and urban mobility, we present a method for city-scale localization with a handheld device. Our central idea is to estimate the 3D position and 3D orientation of the camera from knowledge of the street furniture, which has high repeatability and a large coverage area in the city. Firstly, using inertial measurements acquired with an IMU within the vision-based method speeds up the computation of the position and orientation. Then, it provides a localization and an orientation as accurate as the vision-based method with manual point selection, and certainly better than with automatic detection and point selection. Performance is presented in terms of positioning accuracy. The final aim is to reach, with our method, a precision good enough to propose on-site augmented reality display in future work.
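The pose-estimation step summarized above can be illustrated with a short sketch. The following Python example is not the authors' implementation; it is a minimal illustration, assuming OpenCV's solvePnP, a calibrated pinhole camera, and made-up 2D-3D correspondences between detected image points and a known street-furniture 3D model. An IMU attitude estimate is used as the initial guess of the iterative solver, which is one way inertial measurements can accelerate a vision-based pose computation.

import numpy as np
import cv2

# 3D points of a known piece of street furniture in the world frame (metres)
# and their 2D detections in the image (pixels) -- illustrative values only.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.0, 0.0, 2.5],
                          [0.3, 0.0, 2.5],
                          [0.3, 0.0, 0.0],
                          [0.0, 0.3, 0.0],
                          [0.3, 0.3, 2.5]], dtype=np.float64)
image_points = np.array([[612.0, 415.0],
                         [605.0, 120.0],
                         [648.0, 118.0],
                         [655.0, 413.0],
                         [590.0, 420.0],
                         [668.0, 122.0]], dtype=np.float64)

# Intrinsic calibration of the handheld camera (assumed known).
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Priors: attitude from the IMU (placeholder identity here) and a rough
# translation, e.g. from GNSS, so the model starts in front of the camera.
R_imu = np.eye(3)
rvec_init, _ = cv2.Rodrigues(R_imu)
tvec_init = np.array([[0.0], [0.0], [10.0]])

# Iterative PnP refined from the inertial initial guess.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs,
                              rvec_init, tvec_init,
                              useExtrinsicGuess=True,
                              flags=cv2.SOLVEPNP_ITERATIVE)

if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position = -R.T @ tvec   # camera centre expressed in the world frame
    print("Camera position (world frame):", camera_position.ravel())
    print("Camera orientation (world-to-camera rotation):\n", R)

Starting the solver from the inertial attitude rather than from scratch is what the hybrid scheme exploits: fewer iterations are needed and gross orientation ambiguities are avoided.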
No file deposited

Dates and versions

hal-01451468, version 1 (01-02-2017)

Identifiers

Cite

Nicolas Antigny, Myriam Servières, Valérie Renaudin. Hybrid Visual and Inertial Position and Orientation Estimation based on Known Urban 3D Models. IPIN 2016, International Conference on Indoor Positioning and Indoor Navigation, Oct 2016, Madrid, Spain. pp.4-7, ⟨10.1109/IPIN.2016.7743619⟩. ⟨hal-01451468⟩