Rebalancing gradient to improve self-supervised co-training of depth, odometry and optical flow predictions
Abstract
We present CoopNet, an approach that improves the cooperation of co-trained networks by dynamically adapting the apportionment of gradients to ensure equitable learning progress. It is applied to motion-aware self-supervised prediction of depth maps by introducing a new hybrid loss, based on a distribution model of the photometric reconstruction errors made by, on the one hand, the paired depth and odometry networks and, on the other hand, the optical flow network. This model essentially assumes that the pixels belonging to moving objects (which must be discarded when training depth and odometry) correspond to those where the two reconstructions strongly disagree. We justify this model with theoretical considerations and experimental evidence. A comparative evaluation on the KITTI and CityScapes datasets shows that CoopNet improves on or is comparable to the state of the art in depth, odometry and optical flow prediction. Our code is available here: https://github.com/mhariat/CoopNet.
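To illustrate the disagreement assumption stated above, the following is a minimal sketch, not the authors' implementation: it assumes PyTorch, hypothetical function names (motion_mask, masked_photometric_loss), a hypothetical margin parameter, and illustrative tensor shapes, and simply discards from the depth/odometry loss the pixels where the rigid reconstruction error strongly exceeds the optical-flow reconstruction error.

```python
import torch

def motion_mask(err_rigid, err_flow, margin=0.1):
    """Hypothetical per-pixel mask (illustrative, not the paper's exact rule).

    A pixel is kept for the depth + odometry loss only when the rigid
    (depth + odometry) reconstruction error stays close to the optical-flow
    reconstruction error, i.e. the two reconstructions agree. Strong
    disagreement is treated as evidence of a moving object and the pixel
    is discarded.

    err_rigid, err_flow: (B, 1, H, W) photometric error maps (assumed shapes).
    """
    disagreement = err_rigid - err_flow
    return (disagreement < margin).float()

def masked_photometric_loss(err_rigid, err_flow, margin=0.1):
    """Photometric loss for depth + odometry, restricted to pixels judged static."""
    mask = motion_mask(err_rigid, err_flow, margin)
    return (mask * err_rigid).sum() / mask.sum().clamp(min=1.0)
```

In this sketch the optical-flow branch would still be trained on all pixels with its own photometric loss; only the depth and odometry branches see the masked loss.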
Domains
Computer Science [cs]