Fine scale image registration in large-scale urban LIDAR point sets
Abstract
Urban scene acquisition is very often performed using laser scanners mounted on a vehicle. In parallel, color information is acquired through a set of coarsely aligned camera pictures. The question of combining both measurements naturally arises, whether to add color to the 3D points or to enhance the geometry, but it faces important challenges. Indeed, the 3D geometry acquisition is highly accurate, while the images suffer from distortion and are only coarsely registered to the geometry. In this paper, we introduce a two-step method to register images to large-scale complex point clouds. Our method performs the image-to-geometry registration by iteratively registering the real image to a synthetic image obtained from the estimated camera pose and the point cloud, using either reflectance or normal information. First, a coarse registration is performed by generating a wide-angle synthetic image and exploiting the fact that small pitch and yaw rotations can be estimated as translations in the image plane. Then, a fine registration is performed using a new image metric that is adapted to the difference in modality between the real and synthetic images. This new image metric is more resilient to missing data and large transformations than standard Mutual Information. In the process, we also introduce a method to generate synthetic images from a 3D point cloud that is adapted to large-scale urban scenes with occlusions and sparse areas. The efficiency of our algorithm is demonstrated both qualitatively and quantitatively on datasets of urban scans and associated images.
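As a brief illustration of the coarse-registration assumption (the notation here is not from the paper: \(f\) denotes the focal length, \(\delta\psi\) a small yaw angle, and \(u = f\tan\alpha\) the horizontal image coordinate of a point seen at angle \(\alpha\) from the optical axis), a pinhole camera model gives

\[
u' = f\tan(\alpha + \delta\psi) \;\approx\; u + f\,(1+\tan^2\alpha)\,\delta\psi \;\approx\; u + f\,\delta\psi
\quad \text{for } \alpha \text{ small,}
\]

so a small yaw (and, analogously, pitch) rotation shifts all pixels near the optical axis by nearly the same amount, independently of depth, which is why it can be searched for as a translation in the image plane.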