Multi-view 3D surface reconstruction from SAR images by inverse rendering
Abstract
3D reconstruction of a scene from Synthetic Aperture Radar (SAR) images mainly relies on interferometric measurements, which impose strict constraints on the acquisition process. In recent years, progress in deep learning has significantly advanced multi-view 3D reconstruction in optical imaging, mainly through reconstruction-by-synthesis approaches popularized by Neural Radiance Fields. In this paper, we propose a new inverse rendering method for 3D reconstruction from a few incoherent SAR views, drawing inspiration from these optical approaches. First, we introduce a new simplified differentiable SAR rendering model able to synthesize images from a Digital Surface Model (DSM) and a map of radar backscattering coefficients. Then, we introduce a coarse-to-fine strategy to reconstruct the DSM and the backscattering coefficient map of a SAR scene starting from only a few SAR views. We use a neural field, i.e. a continuous parametric model based on a Multi-Layer Perceptron, to represent the SAR scene. Finally, we present preliminary results of DSM reconstruction from synthetic SAR images produced by ONERA's physically based EMPRISE simulator, supporting the potential of inverse rendering approaches for SAR data to efficiently exploit geometric disparities in future applications such as multi-sensor data fusion.
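The abstract describes the SAR scene as a neural field, a coordinate-based Multi-Layer Perceptron predicting both the DSM height and the backscattering coefficient. The following minimal PyTorch sketch illustrates that idea under assumptions of ours; the layer sizes, activations, and the `SARSceneField` name are illustrative and not taken from the paper.

```python
# Minimal sketch of a coordinate-based neural field for a SAR scene.
# Assumption: an MLP maps 2D ground coordinates to a height value (DSM)
# and a backscattering coefficient. The paper does not specify the exact
# architecture, so dimensions and activations below are illustrative only.
import torch
import torch.nn as nn

class SARSceneField(nn.Module):
    def __init__(self, hidden_dim: int = 256, num_layers: int = 4):
        super().__init__()
        layers = []
        in_dim = 2  # (x, y) ground coordinates
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        self.backbone = nn.Sequential(*layers)
        self.height_head = nn.Linear(hidden_dim, 1)  # DSM height
        self.sigma_head = nn.Linear(hidden_dim, 1)   # backscattering coefficient

    def forward(self, xy: torch.Tensor):
        features = self.backbone(xy)
        height = self.height_head(features)
        # Softplus keeps the predicted backscattering coefficient non-negative.
        sigma = nn.functional.softplus(self.sigma_head(features))
        return height, sigma

# Usage: query the field on ground coordinates to rasterize a DSM and a
# backscatter map, which a differentiable SAR renderer could then project
# into simulated views for comparison with the observed SAR images.
field = SARSceneField()
xy = torch.rand(1024, 2)  # normalized ground coordinates
height, sigma = field(xy)
```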