Can Generalised Divergences Help for Invariant Neural Networks?
Abstract
We consider a framework in which regularisation across multiple augmentations, measured by generalised divergences, induces invariance to non-group transformations during the training of convolutional neural networks. Experiments on supervised classification of images at scales not seen during training show that the proposed method outperforms classical data augmentation.
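As a rough illustration of the idea, the sketch below penalises a divergence between the network's predictions on several rescaled copies of each batch. This is a minimal sketch in PyTorch, not the authors' exact formulation: the symmetric KL divergence stands in for a generalised divergence, and the function name `invariance_regulariser`, the scale set, and the regularisation weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def invariance_regulariser(model, x, scales=(0.75, 1.0, 1.25)):
    """Divergence-based consistency term across multiple scale augmentations.

    Hypothetical sketch: symmetric KL (one instance of a generalised
    divergence) between the model's predictive distributions on rescaled
    copies of the input batch.
    """
    log_probs = []
    for s in scales:
        # Rescale, then resize back so every copy shares the input resolution.
        x_s = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
        x_s = F.interpolate(x_s, size=x.shape[-2:], mode="bilinear", align_corners=False)
        log_probs.append(F.log_softmax(model(x_s), dim=1))

    # Average pairwise symmetric KL between the augmented predictions.
    reg, n_pairs = 0.0, 0
    for i in range(len(log_probs)):
        for j in range(i + 1, len(log_probs)):
            p, q = log_probs[i], log_probs[j]
            reg = reg + 0.5 * (
                F.kl_div(q, p, log_target=True, reduction="batchmean")
                + F.kl_div(p, q, log_target=True, reduction="batchmean")
            )
            n_pairs += 1
    return reg / max(n_pairs, 1)

# Usage (lam is an assumed hyperparameter weighting the invariance term):
# loss = F.cross_entropy(model(x), y) + lam * invariance_regulariser(model, x)
```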
Domains
Computer Science [cs]