Consistent Multi- and Single-View HDR-Image Reconstruction from Single Exposures
Abstract
Recently, there have been attempts to obtain high-dynamic-range (HDR) images from single exposures, as well as efforts to reconstruct multi-view HDR images from multiple input exposures. However, to the best of our knowledge, there have been no attempts to reconstruct multi-view HDR images from multi-view single exposures. We present a two-step methodology to obtain color-consistent multi-view HDR reconstructions from single-exposure multi-view low-dynamic-range (LDR) images. We define a new combination of the Mean Absolute Error and Multi-Scale Structural Similarity Index loss functions to train a network that reconstructs an HDR image from an LDR one. Once trained, we apply this network to multi-view input. When tested on single images, the outputs are competitive with the state of the art. Quantitative and qualitative metrics applied to our results and to the state of the art show that our HDR expansion outperforms prior methods while maintaining comparable reconstruction quality. We also demonstrate that applying this network to multi-view images ensures coherence across the generated grid of HDR images.
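To make the loss combination described above concrete, the following is a minimal PyTorch-style sketch of an MAE + MS-SSIM training loss. It is not the paper's exact formulation: the window (uniform instead of Gaussian), the equal per-scale weighting, the number of scales, and the mixing weight `alpha` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def _ssim(x, y, win=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # Local statistics via a uniform window (simplification of the usual Gaussian window).
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, win, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, win, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, win, stride=1, padding=pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return ssim_map.mean()

def ms_ssim(x, y, scales=3):
    # Simplified multi-scale SSIM: average SSIM over progressively downsampled images.
    vals = []
    for _ in range(scales):
        vals.append(_ssim(x, y))
        x = F.avg_pool2d(x, 2)
        y = F.avg_pool2d(y, 2)
    return torch.stack(vals).mean()

def hdr_reconstruction_loss(pred, target, alpha=0.84):
    # alpha is a hypothetical mixing weight, not a value taken from the paper.
    mae = torch.mean(torch.abs(pred - target))
    return alpha * (1.0 - ms_ssim(pred, target)) + (1.0 - alpha) * mae

# Example usage on dummy (N, C, H, W) tensors:
pred = torch.rand(2, 3, 128, 128)
target = torch.rand(2, 3, 128, 128)
print(hdr_reconstruction_loss(pred, target))
```

In practice one would replace the uniform window with a Gaussian window and use the standard per-scale MS-SSIM weights; the sketch only illustrates how the structural term and the MAE term can be blended into a single scalar loss.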