Fully automatic extrinsic calibration of an RGB-D system using two views of a natural scene
Abstract
RGB-D sensors, such as the low-cost Kinect, are widely used in robotics applications. Obstacle Avoidance (OA), Simultaneous Localization And Mapping (SLAM), and Mobile Object Tracking (MOT) all require accurate information about the position of objects in the environment. 3D cameras are well suited to these tasks, but as low-cost sensors they have to be complemented by other sensors: cameras, laser range finders, and ultrasonic or infrared rangefinders. In order to exploit the data from all sensors in a single algorithm, we have to express these data in a common reference frame. In other words, we have to know the rigid transformation between the sensor frames. In this paper, we propose a new method to recover the rigid transformation (known as the extrinsic parameters in the calibration process) between a depth camera and a conventional camera. We show that such a method is sufficiently accurate without requiring user interaction or a special calibration pattern, unlike other common calibration processes.
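Concretely, knowing the extrinsic parameters means knowing a rotation and a translation mapping points between the two sensor frames (the symbols below are illustrative notation, not the paper's own): a 3D point $p_d$ measured in the depth-camera frame is expressed in the color-camera frame as
$$ p_c = R\,p_d + t, \qquad R \in SO(3),\; t \in \mathbb{R}^3 , $$
and estimating the pair $(R, t)$ is the goal of the extrinsic calibration addressed here.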