Conference Paper, 2022

Generalized Rectifier Wavelet Covariance Models For Texture Synthesis

Abstract

State-of-the-art maximum entropy models for texture synthesis are built from statistics computed on image representations defined by convolutional neural networks (CNNs). Such representations capture rich structures in texture images, outperforming wavelet-based representations in this regard. However, unlike neural networks, wavelets offer interpretable representations, as they are known to detect structures at multiple scales (e.g., edges) in images. In this work, we propose a family of statistics built upon non-linear wavelet-based representations, which can be viewed as a particular instance of a one-layer CNN using a generalized rectifier non-linearity. These statistics significantly improve the visual quality of previous classical wavelet-based models and produce syntheses of similar quality to state-of-the-art models, on both gray-scale and color textures.
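For illustration, the sketch below shows one way such statistics could be computed: wavelet detail coefficients are passed through a shifted ReLU (standing in for the generalized rectifier) and covariances of the resulting channels are collected per scale. This is a minimal, hypothetical example, not the authors' implementation; it assumes PyWavelets for the transform, and the paper's exact non-linearity and statistic set may differ.

```python
# Illustrative sketch only: covariance statistics of rectified wavelet
# coefficients for a grayscale texture. The ReLU offsets below are a
# stand-in for the paper's generalized rectifier.
import numpy as np
import pywt


def rectified_wavelet_covariances(image, wavelet="db2", levels=3, offsets=(0.0, 0.5)):
    """Covariance statistics of rectified wavelet detail coefficients.

    For each scale, the three orientation sub-bands are passed through a
    shifted ReLU (one channel per offset), and the covariance matrix of
    the resulting channels is flattened into a feature vector.
    """
    image = np.asarray(image, dtype=np.float64)
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
    features = []
    for detail in coeffs[1:]:                  # skip the approximation band
        channels = []
        for band in detail:                    # (horizontal, vertical, diagonal)
            for t in offsets:
                channels.append(np.maximum(band - t, 0.0).ravel())
        x = np.stack(channels)                 # (n_channels, n_pixels)
        x = x - x.mean(axis=1, keepdims=True)
        cov = x @ x.T / x.shape[1]             # empirical covariance
        features.append(cov[np.triu_indices(cov.shape[0])])
    return np.concatenate(features)


if __name__ == "__main__":
    texture = np.random.rand(256, 256)          # stand-in for a texture image
    stats = rectified_wavelet_covariances(texture)
    print(stats.shape)
```

In a maximum entropy synthesis setting, statistics of this kind would be measured on the reference texture and a new image would be optimized until its statistics match.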

Dates and versions

hal-03612563, version 1 (17-03-2022)

Identifiers

Cite

Antoine Brochard, Sixin Zhang, Stéphane Mallat. Generalized Rectifier Wavelet Covariance Models For Texture Synthesis. ICLR 2022 - 10th International Conference on Learning Representations, Apr 2022, Virtual, France. ⟨10.48550/arXiv.2203.07902⟩. ⟨hal-03612563⟩