MADNESS Deblender: Maximum A posteriori with Deep NEural networks for Source Separation
Abstract
Due to the unprecedented depth of the upcoming ground-based Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory, approximately two-thirds of the galaxies are likely to be affected by blending: the overlap in images of galaxies that are physically separated along the line of sight. Extracting reliable shapes and photometry from individual objects will therefore be limited by our ability to correct for blending and to control any residual systematic effects. Deblending algorithms tackle this issue by reconstructing the isolated components of a blended scene, but the most commonly used algorithms often fail to model complex, realistic galaxy morphologies. As part of an effort to address this major challenge, we present MADNESS, which takes a data-driven approach and combines pixel-level multi-band information to learn complex priors and obtain the maximum a posteriori solution to the deblending problem. MADNESS is built on deep neural network architectures, namely variational auto-encoders and normalizing flows: the variational auto-encoder reduces the high-dimensional pixel space to a lower-dimensional latent space, while the normalizing flow models a data-driven prior in this latent space. Using a simulated test dataset with galaxy models for a 10-year LSST survey and galaxy densities ranging from 48 to 80 galaxies per arcmin², we characterize the aperture-photometry g-r color, structural similarity index, and pixel cosine similarity of the galaxies reconstructed by MADNESS. We compare our results against state-of-the-art deblenders, including scarlet. Taking the r-band of LSST as an example, we show that MADNESS performs better than scarlet in all of these metrics; for instance, the average absolute value of the relative flux residual in the r-band for MADNESS is approximately 29% lower than that of scarlet. The code is publicly available on GitHub.
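To make the maximum a posteriori formulation concrete, the sketch below shows, in simplified form, the kind of optimization the abstract describes: each galaxy in a blend is represented by a latent vector, a decoder maps latents to image stamps, and gradient ascent maximizes the data log-likelihood of the blended scene plus a log-prior over the latents. This is a minimal single-band illustration, not the MADNESS implementation: the `decoder` here is a toy MLP standing in for the trained variational auto-encoder decoder, `log_prior` is a standard normal standing in for the trained normalizing flow, and all names, dimensions, and hyperparameters are hypothetical.

```python
import torch

# Hypothetical stand-ins: in MADNESS these would be the trained VAE decoder
# and the normalizing-flow prior; here a tiny MLP and a standard-normal
# log-density are used purely for illustration.
LATENT_DIM, STAMP = 8, 45

decoder = torch.nn.Sequential(
    torch.nn.Linear(LATENT_DIM, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, STAMP * STAMP), torch.nn.Softplus(),
)

def log_prior(z):
    # Placeholder for the normalizing-flow log-density over the latent space.
    return -0.5 * (z ** 2).sum(dim=-1)

def deblend_map(blend, n_sources, sigma=1.0, steps=500, lr=0.05):
    """Gradient ascent on the MAP objective:
    log N(blend | sum_i decoder(z_i), sigma^2) + sum_i log_prior(z_i)."""
    z = torch.zeros(n_sources, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Forward model: the blend is the sum of the decoded galaxy stamps.
        model = decoder(z).view(n_sources, STAMP, STAMP).sum(dim=0)
        log_like = -0.5 * ((blend - model) ** 2).sum() / sigma ** 2
        loss = -(log_like + log_prior(z).sum())  # negative log-posterior
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Return the reconstructed isolated components.
        return decoder(z).view(n_sources, STAMP, STAMP)

# Usage: deblend a toy scene assumed to contain two overlapping sources.
blend = torch.rand(STAMP, STAMP)
galaxies = deblend_map(blend, n_sources=2)
```

In the paper's setting the same objective is optimized jointly over all detected sources and all bands, with the Gaussian likelihood term replaced by the survey's pixel noise model; the sketch keeps only the structure of the latent-space MAP search.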