Style-based film grain analysis and synthesis
Abstract
Film grain, which used to be a by-product of the chemical processing of analog film stock, is a desirable feature in the era of digital cameras. Besides contributing to the artistic intent during content creation, film grain also has interesting properties in the video compression chain, such as its ability to mask compression artifacts. In this paper, we present a deep learning-based framework for film grain analysis, generation, and synthesis. Our framework consists of three modules: a style encoder performing film grain style analysis, a mapping network responsible for film grain style generation, and a synthesis network that generates a specific grain style and blends it into given content in a content-adaptive manner. All modules are trained jointly, thanks to dedicated loss functions, on a new large and diverse dataset of pairs of grain-free and grainy images that we made publicly available to the community. Quantitative and qualitative evaluations show that fidelity to the reference grain, diversity of grain styles, and a perceptually pleasant grain synthesis are achieved, demonstrating that each module outperforms the state-of-the-art in the task it was designed for.
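The three-module design described above can be sketched in code. The following is a minimal illustrative skeleton only, not the paper's implementation: all module internals, channel counts, and the latent style dimension are assumptions chosen for clarity.

```python
# Minimal sketch of the three-module framework described in the abstract.
# All architectural details (layer depths, channel counts, latent size)
# are illustrative assumptions, not the paper's actual design.
import torch
import torch.nn as nn

LATENT_DIM = 64  # assumed size of the film grain style code


class StyleEncoder(nn.Module):
    """Analysis: maps a grainy image to a film grain style code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, LATENT_DIM),
        )

    def forward(self, grainy):
        return self.net(grainy)


class MappingNetwork(nn.Module):
    """Generation: maps random noise to a film grain style code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, z):
        return self.net(z)


class SynthesisNetwork(nn.Module):
    """Synthesis: renders a grain style onto grain-free content."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + LATENT_DIM, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, clean, style):
        # Broadcast the style code spatially, condition on the clean
        # content (content-adaptive), and predict a grain residual.
        b, _, h, w = clean.shape
        s = style.view(b, -1, 1, 1).expand(b, style.shape[1], h, w)
        return clean + self.net(torch.cat([clean, s], dim=1))


# Usage: transfer grain from a reference image, or sample a new style.
enc, mapper, synth = StyleEncoder(), MappingNetwork(), SynthesisNetwork()
clean = torch.rand(1, 3, 128, 128)
grainy_ref = torch.rand(1, 3, 128, 128)
out_transfer = synth(clean, enc(grainy_ref))                     # reference grain
out_sampled = synth(clean, mapper(torch.randn(1, LATENT_DIM)))   # sampled grain
```

In this sketch, grain transfer and grain generation share the same synthesis path: only the source of the style code differs, which mirrors the separation of analysis (encoder) and generation (mapping network) described in the abstract.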