Conference paper, Year: 2015

Understanding deep features with computer-generated imagery

Abstract

We introduce an approach for analyzing the variation of features generated by convolutional neural networks (CNNs) with respect to scene factors that occur in natural images. Such factors may include object style, 3D viewpoint, color, and scene lighting configuration. Our approach analyzes CNN feature responses corresponding to different scene factors by controlling for them via rendering using a large database of 3D CAD models. The rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors. We perform a decomposition of the responses based on knowledge of the input scene factors and analyze the resulting components. In particular, we quantify their relative importance in the CNN responses and visualize them using principal component analysis. We show qualitative and quantitative results of our study on three CNNs trained on large image datasets: AlexNet [18], Places [40], and Oxford VGG [8]. We observe important differences across the networks and CNN layers for different scene factors and object categories. Finally, we demonstrate that our analysis based on computer-generated imagery translates to the network representation of natural images.
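To make the pipeline described above concrete, the following is a minimal, hypothetical sketch of one way to implement it, not the authors' exact procedure. It assumes a grid of pre-rendered images indexed by CAD model (style) and viewpoint, uses torchvision's pretrained AlexNet as a stand-in for the trained networks studied in the paper, and replaces the paper's analysis with a simple balanced two-factor mean decomposition plus variance shares and a PCA embedding; `render_paths`, `cnn_features`, and `factor_decomposition` are illustrative names.

```python
import numpy as np
import torch
from PIL import Image
from sklearn.decomposition import PCA
from torchvision import models, transforms

# Pretrained AlexNet from torchvision (an assumption; the paper used Caffe-era models).
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def cnn_features(image_path, layer_index=1):
    """Response of one rendered image at a chosen classifier layer.
    layer_index=1 is the first fully connected layer (fc6) in
    torchvision's AlexNet classifier; 4 would be fc7."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        x = torch.flatten(alexnet.avgpool(alexnet.features(x)), 1)
        for i, module in enumerate(alexnet.classifier):
            x = module(x)
            if i == layer_index:
                break
    return x.squeeze(0).numpy()

def factor_decomposition(render_paths, layer_index=1):
    """render_paths[s][v]: path to a rendering of CAD model s from viewpoint v.
    Splits the responses into a grand mean, a style-only component, a
    viewpoint-only component, and a residual (balanced two-way layout),
    then reports each component's share of total variance and a 2D PCA
    embedding of the viewpoint component for visualization."""
    F = np.stack([[cnn_features(p, layer_index) for p in row]
                  for row in render_paths])            # shape (S, V, D)
    mean = F.mean(axis=(0, 1), keepdims=True)
    style = F.mean(axis=1, keepdims=True) - mean       # depends on the model only
    view = F.mean(axis=0, keepdims=True) - mean        # depends on the viewpoint only
    residual = F - mean - style - view

    total = ((F - mean) ** 2).sum()
    importance = {
        "style": float((np.broadcast_to(style, F.shape) ** 2).sum() / total),
        "viewpoint": float((np.broadcast_to(view, F.shape) ** 2).sum() / total),
        "residual": float((residual ** 2).sum() / total),
    }
    viewpoint_embedding = PCA(n_components=2).fit_transform(view.squeeze(0))
    return importance, viewpoint_embedding
```

Because the rendering grid is balanced, the three components are orthogonal and the reported variance shares sum to one; calling `factor_decomposition` with different `layer_index` values gives the kind of layer-by-layer comparison of scene factors that the abstract refers to.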
Main file
understanding_deep_features_with_CG.pdf (2.27 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01240849, version 1 (12-12-2015)

Identifiers

  • HAL Id: hal-01240849, version 1

Cite

Mathieu Aubry, Bryan Russell. Understanding deep features with computer-generated imagery. ICCV, Dec 2015, Santiago, Chile. ⟨hal-01240849⟩