Conference paper: Computer Vision and Computer Graphics, revised selected papers of VISIGRAPP'07. Year: 2019

Limitations of Metric Loss for the Estimation of Joint Translation and Rotation

Abstract

Localizing objects is a key challenge for robotics, augmented reality and mixed reality applications. Images taken in the real world feature many objects with challenging factors such as occlusions, motion blur and changing lighting. In manufacturing industry scenes, a large majority of objects are poorly textured or highly reflective. Moreover, they often present symmetries, which makes the localization task even more complicated. We study PoseNet, a deep neural network for pose regression, on the T-LESS dataset. Our experiments demonstrate that PoseNet is able to predict translation and rotation separately with high accuracy. However, they also show that it is not able to learn translation and rotation jointly: one of the two modalities is either not learned by the network, or forgotten during training while the other is being learned. This indicates that future work will require other formulations of the loss, as well as other architectures, in order to solve the general pose estimation problem.
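For context, a metric loss of the kind discussed here is, in the spirit of the original PoseNet formulation, a weighted sum of a Euclidean error on the translation and a Euclidean error on the unit-normalized quaternion. The sketch below is an illustration only, not the authors' implementation; the function name `posenet_metric_loss` and the default value of the balancing weight `beta` are assumptions made for the example.

```python
import numpy as np

def posenet_metric_loss(t_pred, t_true, q_pred, q_true, beta=500.0):
    """Illustrative PoseNet-style metric loss (not the paper's exact code).

    t_pred, t_true : 3-vectors, translation in scene units (e.g. metres)
    q_pred, q_true : 4-vectors, rotations as quaternions
    beta           : scalar weight trading rotation error against translation error
    """
    # Euclidean error on the translation component
    trans_err = np.linalg.norm(t_pred - t_true)
    # Normalize the predicted quaternion before comparing it to the ground truth
    q_pred = q_pred / np.linalg.norm(q_pred)
    rot_err = np.linalg.norm(q_pred - q_true)
    # A single scalar couples both modalities through one hand-tuned weight
    return trans_err + beta * rot_err
```

The single weight `beta` forces one scalar to trade a translation error measured in metres against a unitless quaternion distance; the experiments reported in the abstract suggest that, with this kind of coupling, the network tends to learn one modality at the expense of the other.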

Dates and versions

hal-03324093, version 1 (16-09-2021)

Cite

Philippe Pérez de San Roman, Pascal Desbarats, Jean-Philippe Domenger, Axel Buendia. Limitations of Metric Loss for the Estimation of Joint Translation and Rotation. VISIGRAPP 2019, 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Feb 2019, Prague, Czech Republic. pp.590-597, ⟨10.5220/0007525005900597⟩. ⟨hal-03324093⟩