Hair Color Digitization through Imaging and Deep Inverse Graphics
Abstract
Hair appearance is a complex phenomenon due to the intricate geometry of hair and the way light scatters between individual hair fibers.
For this reason, reproducing a specific hair color in a rendering environment is a challenging task that requires manual work and expert knowledge in computer graphics to tune the result visually.
While current hair capture methods focus on hair shape estimation,
many applications could benefit from an automated method for capturing the appearance of a physical hair sample, from augmented/virtual reality to hair dye development.
Building on recent advances in inverse graphics and material capture using deep neural networks, we introduce a novel method for hair color digitization.
Our proposed pipeline captures the color appearance of a physical hair sample and renders synthetic hair images with a similar appearance, simulating different hair styles and/or lighting environments.
Since rendering realistic hair images requires path tracing, the conventional inverse graphics approach based on differentiable rendering is intractable.
Our method combines a controlled imaging device, a path-tracing renderer, and an inverse graphics model based on self-supervised machine learning, which does not require differentiable rendering for training.
We illustrate the performance of our hair digitization method on both real and synthetic images and show that our approach can accurately capture and render hair color.
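
The abstract does not spell out the training procedure, but the key idea, inverse graphics without differentiating through the renderer, can be sketched: use the path tracer offline as a synthetic data generator, then train a network to regress rendering parameters directly from images. The sketch below is a minimal, hypothetical PyTorch illustration; `path_trace_render`, `ColorParamEncoder`, and the 4-dimensional color parameterization are placeholder assumptions, not the authors' actual pipeline.

```python
import torch
import torch.nn as nn

N_PARAMS = 4   # hypothetical size of the hair color parameter vector
IMG_SIZE = 64  # hypothetical render resolution

# Stand-in for the black-box path tracer: a fixed random projection from
# color parameters to an image, so the sketch runs end to end. In a real
# pipeline this would be an offline renderer producing a synthetic dataset.
_gen = torch.Generator().manual_seed(0)
_proj = torch.randn(N_PARAMS, 3 * IMG_SIZE * IMG_SIZE, generator=_gen)

def path_trace_render(params: torch.Tensor) -> torch.Tensor:
    return torch.sigmoid(params @ _proj).reshape(-1, 3, IMG_SIZE, IMG_SIZE)

class ColorParamEncoder(nn.Module):
    """CNN that regresses rendering parameters from a hair image."""
    def __init__(self, n_params: int = N_PARAMS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_params), nn.Sigmoid(),  # params normalized to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

encoder = ColorParamEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(100):
    # Sample random color parameters and render them with the black-box
    # renderer; gradients never flow through the renderer itself.
    true_params = torch.rand(8, N_PARAMS)
    with torch.no_grad():
        images = path_trace_render(true_params)
    loss = nn.functional.mse_loss(encoder(images), true_params)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference time, encoder(captured_image) yields parameters that can be
# fed back to the renderer under new hair styles or lighting environments.
```

Because the renderer only appears inside `torch.no_grad()`, it can be any off-the-shelf path tracer; this is what makes the approach viable where differentiable rendering is not.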