No Reference 3D mesh quality assessment using deep convolutional features
Abstract
3D meshes have gained significant interest in the computer vision community due to their use in several applications such as virtual reality, gaming, and heritage preservation. However, these 3D contents may be altered during processing steps such as acquisition, compression, or denoising. In this context, visual quality assessment algorithms can be used to quantify the amount of distortion that affects a 3D mesh and degrades its visual rendering. We introduce a no-reference mesh quality assessment index based on deep convolutional features, named DCFQI (Deep Convolutional Features Quality Index). Leveraging the power of deep learning, in particular transfer learning, allows the proposed approach to score visual quality without the need for reference content, hence emulating human vision. A 3D mesh is rendered into 2D views and patches, from which a pre-trained convolutional neural network automatically extracts deep features. The obtained features are fed to a Multi-Layer Perceptron (MLP) that predicts the objective quality score. Two learning strategies for blind quality estimation are presented and compared. The obtained results, in terms of correlation with subjective human quality scores, demonstrate the superiority of the proposed index over existing methods.
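The sketch below illustrates the general pipeline described in the abstract (pre-trained CNN features from rendered 2D views/patches, followed by an MLP regressor), under assumptions of our own: the abstract does not specify the backbone, patch size, MLP layout, or how patch scores are pooled, so the VGG16 backbone, 224x224 patches, the single-hidden-layer MLP, and the mean pooling used here are purely illustrative and not the authors' exact DCFQI implementation.

```python
# Minimal sketch of a no-reference quality pipeline of this kind, NOT the
# authors' implementation: backbone, patch size, MLP layout and score pooling
# are assumptions made for illustration.
import torch
import torch.nn as nn
from torchvision import models


class DeepFeatureQualityRegressor(nn.Module):
    """Extracts deep convolutional features from rendered 2D patches and
    regresses a single no-reference quality score with an MLP."""

    def __init__(self, mlp_hidden=512):
        super().__init__()
        # Pre-trained CNN used as a frozen feature extractor (transfer learning).
        backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = backbone.features           # convolutional layers only
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        for p in self.features.parameters():
            p.requires_grad = False                 # keep convolutional weights fixed
        # Small MLP trained on top of the deep features to predict quality.
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, mlp_hidden),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(mlp_hidden, 1),               # patch-level quality score
        )

    def forward(self, patches):
        # patches: (N, 3, 224, 224) 2D views/patches rendered from the 3D mesh
        feats = self.pool(self.features(patches))
        scores = self.mlp(feats)
        # Pool patch-level predictions into one score for the whole mesh
        # (simple averaging, an illustrative choice).
        return scores.mean()


if __name__ == "__main__":
    model = DeepFeatureQualityRegressor().eval()
    dummy_patches = torch.rand(8, 3, 224, 224)      # stand-in for rendered patches
    with torch.no_grad():
        print(float(model(dummy_patches)))
```

In practice the MLP would be trained to regress subjective quality scores (e.g., MOS values) from a mesh quality database; the two learning strategies compared in the paper are not reproduced here.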
Domains
Computer Science [cs]

Origin: Files produced by the author(s)