Templates for 3D Object Pose Estimation Revisited: Generalization to New Objects and Robustness to Occlusions
Conference paper, Year: 2022

Abstract

We present a method that can recognize new objects and estimate their 3D pose in RGB images even under partial occlusions. Our method requires neither a training phase on these objects nor real images depicting them, only their CAD models. It relies on a small set of training objects to learn local object representations, which allow us to locally match the input image to a set of "templates", rendered images of the CAD models for the new objects. In contrast to state-of-the-art methods, the new objects on which our method is applied can be very different from the training objects. As a result, we are the first to show generalization without retraining on the LINEMOD and Occlusion-LINEMOD datasets. Our analysis of the failure modes of previous template-based approaches further confirms the benefits of local features for template matching. We outperform the state-of-the-art template matching methods on the LINEMOD, Occlusion-LINEMOD and T-LESS datasets. Our source code and data are publicly available at https://github.com/nv-nguyen/template-pose
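
The abstract describes matching local object representations of the input image against rendered templates of a CAD model. The snippet below is a minimal, hypothetical sketch of such local-feature template matching, not the authors' released code (see the repository linked above for the actual implementation). It assumes per-location descriptor maps have already been extracted for the query crop and for each rendered template, scores each template by its mean per-location cosine similarity, and returns the best-matching template, whose known rendering viewpoint would serve as the pose estimate. Averaging local similarities, rather than comparing a single global descriptor, is what lends robustness to partial occlusions: an occluded region only degrades part of the score.

```python
# Hypothetical sketch of local-feature template matching; not the authors' code.
# Assumes descriptor maps (C x H x W) were produced by a feature extractor
# trained on a small set of objects, as described in the abstract.
import torch
import torch.nn.functional as F

def match_templates(query_feat: torch.Tensor, template_feats: torch.Tensor) -> int:
    """
    query_feat:     (C, H, W) local descriptors of the input image crop.
    template_feats: (N, C, H, W) local descriptors of N rendered templates.
    Returns the index of the best-matching template; its rendering viewpoint
    gives the 3D pose estimate.
    """
    q = F.normalize(query_feat.flatten(1), dim=0)       # (C, H*W), unit-norm per location
    t = F.normalize(template_feats.flatten(2), dim=1)   # (N, C, H*W)
    # Cosine similarity at each spatial location, averaged over locations.
    # Local scores make the matching robust to partial occlusions, since an
    # occluded region only lowers a fraction of the total score.
    sim = (t * q.unsqueeze(0)).sum(dim=1)                # (N, H*W)
    return int(sim.mean(dim=1).argmax())
```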

Dates and versions

hal-03791820, version 1 (29-09-2022)

Identifiers

Cite

Van Nguyen Nguyen, Yinlin Hu, Yang Xiao, Mathieu Salzmann, Vincent Lepetit. Templates for 3D Object Pose Estimation Revisited: Generalization to New Objects and Robustness to Occlusions. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, New Orleans, United States. ⟨hal-03791820⟩