Linear inverse problems with nonnegativity constraints through divergences: sparsity of optimisers
Abstract
We pass to the continuum in optimisation problems associated with linear inverse problems $y = Ax$ with nonnegativity constraint $x \geq 0$. We focus on the case where the noise model leads to maximum likelihood estimation through the so-called $\beta$-divergences, which cover several of the most common noise statistics, such as Gaussian, Poisson and multiplicative Gamma. Considering~$x$ as a Radon measure over the domain on which the reconstruction takes place, we prove a general sparsity result. In the high-noise regime corresponding to $y \notin \{Ax \mid x \geq 0\}$, optimisers are typically sparse, taking the form of sums of Dirac measures. We hence provide an explanation as to why any algorithm that successfully solves the optimisation problem will produce undesirably spiky-looking images as the image resolution is refined, a phenomenon well documented in the literature. We illustrate these results with several numerical examples inspired by medical imaging.
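For orientation, a commonly used pointwise definition of the $\beta$-divergence is sketched below; the precise normalisation adopted in the body of the paper may differ.
\[
d_\beta(y, x) =
\begin{cases}
\dfrac{1}{\beta(\beta - 1)}\bigl(y^\beta + (\beta - 1)\,x^\beta - \beta\, y\, x^{\beta - 1}\bigr), & \beta \in \mathbb{R} \setminus \{0, 1\},\\[1.5ex]
y \log \dfrac{y}{x} - y + x, & \beta = 1 \quad \text{(Kullback--Leibler, Poisson noise)},\\[1.5ex]
\dfrac{y}{x} - \log \dfrac{y}{x} - 1, & \beta = 0 \quad \text{(Itakura--Saito, multiplicative Gamma noise)}.
\end{cases}
\]
The choice $\beta = 2$ recovers the squared Euclidean distance $\tfrac{1}{2}(y - x)^2$ associated with Gaussian noise.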