A visual attention model for stereoscopic 3D images using monocular cues
Abstract
2D visual saliency has been widely explored for decades, and several comprehensive, well-performing models have been proposed, but they are not fully adapted to stereoscopic 3D content. To date, only a few attempts at 3D saliency prediction can be found in the literature, and most of them rely on binocular depth/disparity. The latter cannot be obtained reliably when the stereo-pair is processed asymmetrically, exploiting the phenomenon of binocular suppression. Motivated by this observation, we propose in this paper a new saliency model for stereoscopic 3D images. The proposed model considers two features: (1) a spatial feature based on the characteristics of interest points, and (2) a depth feature based on monocular cues. The latter is adapted to asymmetric content and uses occlusions to predict the depth order of objects in the image. A tunable fusion strategy is proposed in order to take advantage of different ways of combining the conspicuity maps. For performance evaluation, an eye-tracking database is created using stereo-pairs with varied content. The proposed model performs very well in comparison with models from the literature, and the results show that the use of monocular cues outperforms the use of disparity.
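As a minimal illustration of what a tunable fusion of conspicuity maps could look like (the fusion rule, parameter names, and weights below are assumptions for this sketch, not the paper's actual formulation), the spatial and depth maps can be pooled with weights that trade off additive and multiplicative combination:

```python
import numpy as np

def fuse_conspicuity(spatial_map, depth_map, alpha=0.5, beta=0.5, gamma=0.0):
    """Hypothetical tunable fusion of spatial and depth conspicuity maps.

    alpha and beta weight the additive terms; gamma weights a multiplicative
    interaction term. This rule is illustrative, not the model's definition.
    """
    def normalize(m):
        # Rescale a map to [0, 1] so the fusion weights are comparable.
        m = m.astype(np.float64)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    s = normalize(spatial_map)
    d = normalize(depth_map)
    fused = alpha * s + beta * d + gamma * (s * d)
    return normalize(fused)

if __name__ == "__main__":
    # Example with random 64x64 maps and a purely additive setting.
    rng = np.random.default_rng(0)
    saliency = fuse_conspicuity(rng.random((64, 64)), rng.random((64, 64)),
                                alpha=0.6, beta=0.4, gamma=0.0)
    print(saliency.shape, float(saliency.min()), float(saliency.max()))
```

Varying alpha, beta, and gamma is one way such a fusion could be "tuned" toward additive or multiplicative pooling of the two conspicuity maps.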