Virtual 3D city model as an a priori information source for a vehicle ego-localization system
Abstract
This paper aims to demonstrate the usefulness of integrating virtual 3D models into vehicle ego-localization systems. Vehicle localization algorithms are usually based on multi-sensor data fusion. Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), are used to provide measurements of the geographic location. Nevertheless, GNSS solutions suffer from signal attenuation and masking, multipath phenomena and lack of visibility, especially in urban areas. This leads to degraded or even totally lost positioning information and, consequently, unsatisfactory performance. Dead-reckoning and inertial sensors are therefore often added to back up GPS when measurements are inaccurate or unavailable, or when high-frequency location estimation is required. However, dead-reckoning localization may drift in the long term due to error accumulation. To back up GPS and compensate for the drift of dead-reckoning-based localization, two approaches integrating a virtual 3D city model are proposed in this paper. In both approaches, the virtual 3D model is registered with respect to the scene perceived by an on-board sensor. From this real/virtual scene matching, the transformation (rotation and translation) between the real sensor and the virtual sensor (whose position and orientation are known) can be computed. Both approaches thus determine the pose of the real sensor embedded on the vehicle. In the first approach, the considered perception sensor is a camera, and in the second approach, it is a laser scanner. The first approach is based on image matching between the virtual image extracted from the 3D city model and the real image acquired by the camera. Its two major steps are: (1) detection and matching of feature points in the real and virtual images (three feature detectors are compared: the Harris corner detector, SIFT and SURF); (2) pose computation using the POSIT algorithm. The second approach relies on an on-board horizontal laser scanner that provides a set of distances between the sensor and the environment. This set of distances is matched with depth information (a virtual laser scan) extracted from the virtual 3D city model. The pose estimates provided by these two approaches can be integrated into a data fusion framework; in this paper, the result of the first approach is integrated into an IMM-UKF (Interacting Multiple Model - Unscented Kalman Filter) data fusion framework. Experimental results obtained with real data illustrate the feasibility and the performance of the proposed approaches.
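To make the first approach concrete, the following Python/OpenCV sketch illustrates the general idea under stated assumptions: SIFT features are matched between the real camera image and an image rendered from the virtual 3D model, the matched virtual pixels are back-projected to 3D using the depth rendered from the model, and the real camera pose is then recovered. The helper names (backproject_to_3d, estimate_pose) are hypothetical, and robust PnP (cv2.solvePnPRansac) is used here as a stand-in for the POSIT step described in the paper; this is not the authors' implementation.

```python
import numpy as np
import cv2


def backproject_to_3d(pts_2d, depth_map, K):
    """Back-project virtual-image pixels to 3D points using the depth map
    rendered from the 3D city model and the virtual camera intrinsics K."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    pts_3d = []
    for u, v in pts_2d:
        z = depth_map[int(round(v)), int(round(u))]
        pts_3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
    return np.asarray(pts_3d, dtype=np.float64)


def estimate_pose(real_img, virtual_img, virtual_depth, K):
    """Match SIFT features between the real image and the rendered virtual
    image, then estimate the real camera pose (rotation and translation)."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(real_img, None)
    kp_v, des_v = sift.detectAndCompute(virtual_img, None)

    # Brute-force matching with Lowe's ratio test to reject ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = []
    for pair in matcher.knnMatch(des_r, des_v, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            matches.append(pair[0])

    img_pts = np.float64([kp_r[m.queryIdx].pt for m in matches])   # real image
    virt_pts = np.float64([kp_v[m.trainIdx].pt for m in matches])  # virtual image
    obj_pts = backproject_to_3d(virt_pts, virtual_depth, K)        # 3D from model

    # Robust PnP stands in for the POSIT pose-computation step of the paper.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return ok, rvec, tvec
```

The recovered (rvec, tvec) gives the pose of the real camera relative to the model frame; since the virtual camera pose is known, this yields the vehicle pose that can then feed the data fusion stage.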