Could the BubbleView metaphor be used to infer visual attention on 3D graphical content?
Abstract
Understanding how human gaze is deployed on 3D graphical objects is of critical importance for delivering rich and complex 3D environments without strong latency or rendering constraints. However, the data needed to study this gaze deployment can be costly and difficult to obtain, especially in the context of the COVID-19 pandemic, during which in-lab experiments are strongly discouraged. To alleviate these issues, we propose to use the BubbleView metaphor as a way of crowdsourcing visual attention data on 3D graphical content. In this paper, we question the adequacy of this method as a reliable proxy for visual attention in the context of 3D graphical objects. Moreover, we show that data obtained in this manner can be used to train visual saliency models, with only a slight tradeoff in performance compared to using ground-truth eye-tracking data.
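For readers unfamiliar with the metaphor, BubbleView shows the observer a blurred stimulus and reveals a sharp circular "bubble" around each mouse click, so that clicks can be aggregated as a proxy for fixations. The sketch below illustrates this blur-and-click mechanic only; the function name, parameters (bubble radius, blur strength), file name, and the choice of Python with OpenCV are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2  # assumed available; any library with Gaussian blur would do


def bubbleview_reveal(image, click_xy, bubble_radius=30, blur_sigma=10):
    """Simulate one BubbleView interaction: the stimulus is blurred everywhere
    except inside a sharp circular 'bubble' centred on the viewer's click."""
    blurred = cv2.GaussianBlur(image, (0, 0), blur_sigma)
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.circle(mask, click_xy, bubble_radius, 255, thickness=-1)
    mask = cv2.merge([mask] * 3) if image.ndim == 3 else mask
    # Keep the original pixels inside the bubble, the blurred ones elsewhere.
    return np.where(mask > 0, image, blurred)


if __name__ == "__main__":
    # Hypothetical rendered view of a 3D object and crowd-sourced click positions.
    img = cv2.imread("render_of_3d_object.png")
    clicks = [(120, 200), (340, 180)]
    for xy in clicks:
        shown = bubbleview_reveal(img, xy)
        # The recorded clicks would then be aggregated into attention maps,
        # much like eye-tracking fixations.
```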