Conference paper, Year: 2020

Depth-Adapted CNN for RGB-D cameras

Abstract

Conventional 2D Convolutional Neural Networks (CNNs) extract features from an input image by applying linear filters. These filters compute spatial coherence by weighting the photometric information over a fixed neighborhood, without taking the geometric information into account. We tackle the problem of improving classical RGB CNN methods by using the depth information provided by RGB-D cameras. State-of-the-art approaches use depth as an additional channel or image (HHA), or switch from 2D CNNs to 3D CNNs. This paper proposes a novel and generic procedure to combine photometric and geometric information within a CNN architecture. The depth data is represented as a 2D offset that adapts the spatial sampling locations. The resulting model is invariant to scale and to rotation around the X and Y axes of the camera coordinate system. Moreover, when the depth data is constant, our model is equivalent to a regular CNN. Experiments on benchmarks validate the effectiveness of our model.
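As an illustration of the "depth as 2D offset" idea described above, here is a minimal sketch built on torchvision's deformable convolution. The class name DepthAdaptedConv2d and the specific offset rule (rescaling the regular sampling grid by inverse depth) are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: a convolution whose 2D sampling offsets are derived
# from the depth map, reusing torchvision's deform_conv2d. The offset rule
# below is an illustrative assumption, not the authors' exact model.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class DepthAdaptedConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.kernel_size = kernel_size
        self.padding = padding
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.01
        )
        self.bias = nn.Parameter(torch.zeros(out_channels))
        # Regular k x k sampling grid, centered at 0, stored as (y, x) pairs.
        r = torch.arange(kernel_size) - kernel_size // 2
        gy, gx = torch.meshgrid(r, r, indexing="ij")
        self.register_buffer("grid", torch.stack((gy, gx), dim=-1).float())  # (k, k, 2)

    def forward(self, rgb, depth):
        # rgb: (N, C, H, W), depth: (N, 1, H, W) in meters.
        n, _, h, w = rgb.shape
        k = self.kernel_size
        # Illustrative rule: the effective kernel footprint becomes grid / depth,
        # so nearby (small-depth) regions are sampled with a wider footprint and
        # distant regions with a narrower one (scale adaptation).
        scale = (1.0 / depth.clamp(min=1e-3)) - 1.0          # (N, 1, H, W)
        grid = self.grid.view(1, 2 * k * k, 1, 1)            # (1, 2*k*k, 1, 1)
        offset = scale * grid                                  # (N, 2*k*k, H, W)
        return deform_conv2d(
            rgb, offset, self.weight, self.bias, padding=self.padding
        )


# Usage: with unit depth the offsets are zero, so the layer reduces to a
# regular convolution, consistent with the constant-depth remark above.
layer = DepthAdaptedConv2d(3, 16)
rgb = torch.randn(2, 3, 64, 64)
depth = torch.full((2, 1, 64, 64), 1.0)
out = layer(rgb, depth)  # (2, 16, 64, 64)
```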
Main file
ACCV2020_ZACN__Camera_ready_.pdf (1.78 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02946902, version 1 (23-09-2020)

Identifiers

  • HAL Id: hal-02946902, version 1

Cite

Zongwei Wu, Guillaume Allibert, Christophe Stolz, Cédric Demonceaux. Depth-Adapted CNN for RGB-D cameras. 15th Asian Conference on Computer Vision (Oral presentation), Nov 2020, Kyoto (Virtual conference), Japan. ⟨hal-02946902⟩
