Modeling Visual Context is Key to Augmenting Object Detection Datasets
Conference paper, Year: 2018

Modeling Visual Context is Key to Augmenting Object Detection Datasets

Abstract

Data augmentation is well known to be important for training deep neural networks for visual recognition. By artificially increasing the number of training examples, it helps reduce overfitting and improves generalization. For object detection, classical data augmentation approaches generate new images through basic geometric transformations and color changes of the original training images. In this work, we go one step further and leverage segmentation annotations to increase the number of object instances present in the training data. We show that, for this approach to be successful, appropriately modeling the visual context surrounding objects is crucial to place them in the right environment; without such a context model, the strategy actually hurts performance. With our context model, we achieve significant mean average precision improvements on the VOC'12 benchmark when few labeled examples are available.
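To make the idea concrete, the sketch below illustrates one plausible way to implement copy-paste augmentation guided by a context model: segmented object instances are blended into a training image at the candidate location the context model rates most plausible. This is a minimal illustration, not the authors' implementation; the function context_score, the candidate-sampling loop, and all names are assumptions introduced here for exposition.

import numpy as np

def paste_instance(image, instance_rgb, instance_mask, top_left):
    """Blend a segmented object instance into `image` at `top_left` = (y, x)."""
    augmented = image.copy()
    h, w = instance_mask.shape
    y, x = top_left
    region = augmented[y:y + h, x:x + w]
    # Copy only the pixels covered by the segmentation mask.
    region[instance_mask] = instance_rgb[instance_mask]
    return augmented

def augment_with_context(image, instance_rgb, instance_mask,
                         context_score, n_candidates=100, rng=None):
    """Sample candidate locations and paste the instance where the
    (hypothetical) context model `context_score` gives the highest score."""
    rng = rng if rng is not None else np.random.default_rng()
    img_h, img_w = image.shape[:2]
    h, w = instance_mask.shape
    best_score, best_loc = -np.inf, None
    for _ in range(n_candidates):
        y = int(rng.integers(0, img_h - h))
        x = int(rng.integers(0, img_w - w))
        # `context_score` stands in for a learned model scoring how plausible
        # the object class is at this location given the surrounding pixels.
        score = context_score(image, (y, x, h, w))
        if score > best_score:
            best_score, best_loc = score, (y, x)
    return paste_instance(image, instance_rgb, instance_mask, best_loc)

In this sketch, replacing context_score with a constant would reduce the procedure to random placement, which, as the abstract notes, can actually hurt detection performance.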
Main file: eccv2018submission.pdf (2.64 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01844474 , version 1 (19-07-2018)

Identifiers

Cite

Nikita Dvornik, Julien Mairal, Cordelia Schmid. Modeling Visual Context is Key to Augmenting Object Detection Datasets. ECCV 2018 - European Conference on Computer Vision, Sep 2018, Munich, Germany. pp.375-391, ⟨10.1007/978-3-030-01258-8_23⟩. ⟨hal-01844474⟩
370 views
449 downloads
