Conference Paper, Year: 2020

Correspondence Learning via Linearly-invariant Embedding

Abstract

In this paper, we propose a fully differentiable pipeline for estimating accurate dense correspondences between 3D point clouds. The proposed pipeline is an extension and a generalization of the functional maps framework. However, instead of using the Laplace-Beltrami eigenfunctions as done in virtually all previous works in this domain, we demonstrate that learning the basis from data can both improve robustness and lead to better accuracy in challenging settings. We interpret the basis as a learned embedding into a higher-dimensional space. Following the functional map paradigm, the optimal transformation in this embedding space must be linear, and we propose a separate architecture that estimates this transformation by learning optimal descriptor functions. This leads to the first end-to-end trainable functional map-based correspondence approach in which both the basis and the descriptors are learned from data. Interestingly, we also observe that learning a "canonical" embedding leads to worse results, suggesting that leaving an extra linear degree of freedom to the embedding network gives it more robustness, thereby also shedding light on the success of previous methods. Finally, we demonstrate that our approach achieves state-of-the-art results in challenging non-rigid 3D point cloud correspondence applications.
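
The matching step described in the abstract admits a compact linear-algebra summary: once the networks have produced per-point embeddings (the learned basis) and descriptors for a source and a target point cloud, the optimal linear transformation between the two embeddings is obtained by least squares, and dense correspondences follow from nearest-neighbor search in the transformed embedding space. The sketch below is a minimal illustration of that step only, not the authors' implementation; it assumes precomputed NumPy arrays, and the function name `dense_correspondence` is hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def dense_correspondence(phi_x, phi_y, desc_x, desc_y):
    """Match points of a source cloud X to a target cloud Y.

    phi_x  : (n_x, k) learned embedding (basis) of the source points
    phi_y  : (n_y, k) learned embedding (basis) of the target points
    desc_x : (n_x, d) learned descriptor functions on the source
    desc_y : (n_y, d) learned descriptor functions on the target
    Returns an array of length n_x giving, for each source point,
    the index of its estimated correspondence in the target cloud.
    """
    # Project the descriptors onto each embedding (least-squares coefficients).
    A = np.linalg.lstsq(phi_x, desc_x, rcond=None)[0]   # (k, d)
    B = np.linalg.lstsq(phi_y, desc_y, rcond=None)[0]   # (k, d)

    # Optimal linear transformation in the embedding space:
    # the (k, k) matrix C minimizing ||C B - A||_F, i.e. the map that
    # aligns the projected descriptors of the two shapes.
    C = np.linalg.lstsq(B.T, A.T, rcond=None)[0].T

    # Dense point-to-point map: each transformed source embedding row is
    # matched to its nearest target embedding row.
    _, match = cKDTree(phi_y).query(phi_x @ C)
    return match
```

Because the transformation is recovered by a plain least-squares solve, the same operation can be written with a differentiable solver during training, which is consistent with the fully differentiable pipeline described above.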

Dates and versions

hal-03046448, version 1 (08-12-2020)

Identifiers

Cite

Riccardo Marin, Marie-Julie Rakotosaona, Simone Melzi, Maks Ovsjanikov. Correspondence Learning via Linearly-invariant Embedding. NeurIPS 2020, Dec 2020, Virtual Conference. ⟨hal-03046448⟩