Limitations of Metric Loss for the Estimation of Joint Translation and Rotation
Abstract
Localizing objects is a key challenge for robotics, augmented reality and mixed reality applications. Images taken in the real world feature many objects with challenging factors such as occlusions, motion blur and changing lighting conditions. In industrial manufacturing scenes, a large majority of objects are poorly textured or highly reflective. Moreover, they often present symmetries, which makes the localization task even more complicated. We study PoseNet, a deep neural network for pose regression, on the T-LESS dataset. Our experiments demonstrate that PoseNet is able to predict translation and rotation separately with high accuracy. However, they also show that it is not able to learn translation and rotation jointly: one of the two modalities is either not learned by the network, or forgotten during training while the other is being learned. This suggests that future work will require other formulations of the loss, as well as other architectures, in order to solve the general pose estimation problem.
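For context, the metric loss questioned in the title is typically the PoseNet-style formulation of Kendall et al. (2015), which couples translation and rotation through a single weighted sum. The sketch below, in PyTorch, illustrates that formulation; the function name and the value of `beta` are illustrative assumptions, not values taken from this paper.

```python
import torch


def posenet_metric_loss(t_pred, q_pred, t_gt, q_gt, beta=500.0):
    """Joint translation + rotation metric loss in the style of
    PoseNet (Kendall et al., 2015). beta balances the scales of the
    two terms; 500 is a commonly cited choice, not one reported here.
    """
    # Translation term: Euclidean distance between predicted and
    # ground-truth translation vectors.
    loss_t = torch.norm(t_pred - t_gt, dim=-1)

    # Rotation term: distance between the ground-truth unit quaternion
    # and the normalized predicted quaternion.
    q_pred_unit = q_pred / torch.norm(q_pred, dim=-1, keepdim=True)
    loss_q = torch.norm(q_pred_unit - q_gt, dim=-1)

    # A single scalar weight couples the two modalities; this coupling
    # is precisely what the abstract argues is insufficient for
    # learning translation and rotation jointly.
    return (loss_t + beta * loss_q).mean()
```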