Volumetric Multi-View Rendering
Abstract
Rendering photo-realistic images with Monte Carlo path tracing often requires sampling a large number of paths to reach acceptable noise levels. This is particularly true when rendering participating media, which complicate light paths with multiple scattering events. Our goal is to accelerate the rendering of heterogeneous participating media by exploiting redundancy across views, for instance when rendering animated camera paths, motion blur in consecutive frames, or multi-view images such as lenticular or light-field images. This poses a challenge, as existing methods for sharing light paths across views cannot handle heterogeneous participating media, and classical estimators are not optimal in this context. We address these issues with three key ideas. First, we propose new volume shift mappings that transform light paths from one view to another within the recently introduced null-scattering framework, taking into account changes in density along the transformed path. Second, we generate a shared path suffix that best contributes to a subset of views, effectively reducing variance. Third, we introduce the multiple weighted importance sampling estimator, which benefits from multiple importance sampling for combining sampling strategies and from weighted importance sampling for reducing the variance due to non-contributing strategies. We observe significant reuse when views largely overlap, with no visible bias and reduced variance compared to regular path tracing at equal rendering time. Our method integrates readily into existing volumetric path tracing pipelines.
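For context on the estimator combination mentioned above, the two ingredients can be sketched in their textbook forms; the notation ($f$, $p_i$, $g$, $N_i$) is generic background and is not taken from the paper, whose combined multiple weighted importance sampling estimator is defined in the full text. Multiple importance sampling with the balance heuristic combines $M$ strategies with densities $p_i$ and $N_i$ samples each:

$$\langle I\rangle_{\mathrm{MIS}} = \sum_{i=1}^{M} \frac{1}{N_i} \sum_{j=1}^{N_i} w_i(X_{i,j})\,\frac{f(X_{i,j})}{p_i(X_{i,j})}, \qquad w_i(x) = \frac{N_i\,p_i(x)}{\sum_{k} N_k\,p_k(x)}.$$

Weighted importance sampling, with samples $X_j \sim p$ and an auxiliary weighting function $g$ normalized so that $\int g = 1$, is the ratio estimator

$$\langle I\rangle_{\mathrm{WIS}} = \frac{\sum_{j=1}^{N} f(X_j)/p(X_j)}{\sum_{j=1}^{N} g(X_j)/p(X_j)},$$

which is consistent (its bias vanishes as $N \to \infty$) and tends to reduce variance when $g$ is roughly proportional to $f$.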
Main file
volmvpt2022.pdf (51.57 MB)
volmvpt2022-sup.pdf (971.71 KB)
Origin: Files produced by the author(s)