Sub‐Pixel Displacement Estimation With Deep Learning: Application to Optical Satellite Images Containing Sharp Displacements
Abstract
Optical image correlation is a powerful method for remotely constraining ground movement related to natural disasters (e.g., earthquakes, volcanoes, landslides) from optical satellite imagery. This approach enables the characterization and identification of the causal factors and mechanisms underlying such processes. By comparing satellite images acquired before and after a period of movement, sub-pixel correlation algorithms yield highly accurate (m-to-cm level) displacement fields at high spatial resolution (dm-to-cm). However, these algorithms generally assume a homogeneous translation of all pixels within a given correlation window, which biases the estimated ground displacement wherever this simplification does not hold, especially near sharp displacement gradients such as those found in the near field of earthquake surface ruptures. In this paper, we present an innovative deep learning method that estimates sub-pixel displacement maps from optical satellite images for the retrieval of ground displacement. From a realistic simulated database of Landsat-8 satellite image pairs containing simulated sub-pixel shifts and sharp discontinuities, we develop a Convolutional Neural Network able to retrieve sub-pixel displacements. Comparison with state-of-the-art correlation methods shows that our pipeline reduces the estimation bias around fault ruptures by 32%, leading to a more accurate characterization of near-field strain in surface-rupturing earthquakes. Application to the 2019 Ridgecrest earthquake demonstrates the ability of our model to accurately and quickly resolve ground displacement from real satellite images. Code is made available at https://gricad-gitlab.univ-grenoble-alpes.fr/montagtr/cnn4l-discontinuities.
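To make the simulated-database idea concrete, the following minimal sketch builds one synthetic training sample: a pre-event patch, a post-event patch in which each side of a synthetic rupture is warped by a different sub-pixel offset, and the per-pixel displacement map that would serve as a regression target. This is not the authors' data generator; the patch size, shift values, the `fault_col` location, and the use of `scipy.ndimage.shift` are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift

# Illustrative sketch only (not the paper's pipeline): one synthetic sample with a
# sharp displacement discontinuity and sub-pixel shifts on either side of it.
rng = np.random.default_rng(seed=0)

pre = rng.random((128, 128))      # stand-in for a Landsat-8 panchromatic crop
dx_west, dx_east = 0.30, -0.25    # hypothetical sub-pixel east-west shifts (pixels)
fault_col = 64                    # hypothetical column of the sharp discontinuity

# Warp each side of the synthetic "fault" by a different sub-pixel amount
# (cubic interpolation, edge values repeated at the borders).
west = shift(pre, (0.0, dx_west), order=3, mode="nearest")
east = shift(pre, (0.0, dx_east), order=3, mode="nearest")
post = np.hstack([west[:, :fault_col], east[:, fault_col:]])

# Ground-truth displacement map: piecewise constant across the discontinuity.
truth = np.full_like(pre, dx_west)
truth[:, fault_col:] = dx_east
```

In the actual database described above, the pairs are derived from real Landsat-8 imagery rather than random noise, so the network is trained on realistic image textures; the piecewise nature of the target is what exposes the model to sharp displacement boundaries.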
The precise estimation of ground displacement caused by natural hazards such as earthquakes, volcanoes, and landslides, as well as the monitoring of glaciers, can be performed by comparing two optical satellite images of the same region acquired on different dates. The challenge resides in the fact that the ground motion is generally smaller than the satellite image resolution: sub-pixel precision is therefore critical. One solution, at the core of current optical correlation methods, is to assume a uniform displacement over a small window (typically between 3 and 100 pixels on a side). However, this assumption can lead to erroneous estimates, notably close to a sharp discontinuity such as a fault rupture. We present here the first data-driven method for ground displacement estimation, relying on a machine learning model and a synthetically generated database. This database, which includes images containing synthetic sharp displacement boundaries so that the learned model is more realistic, is used to train the model to retrieve the local displacement for a given image pair. Our results show that we improve the accuracy near fault ruptures compared to state-of-the-art methods, which is important for studying the mechanics of near-fault processes.
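For contrast, the sketch below illustrates the classical windowed approach described above: a single sub-pixel (row, column) offset is estimated per correlation window under the assumption that the whole window translates rigidly. It is not the paper's baseline implementation; the window size, the use of skimage's `phase_cross_correlation`, and the `upsample_factor` are assumptions, and it reuses the `pre`/`post` pair from the previous sketch.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def windowed_offsets(pre, post, win=32, upsample=100):
    """Classical windowed baseline (not the CNN of the paper): one sub-pixel
    (dy, dx) estimate per window, assuming rigid translation inside it."""
    ny, nx = pre.shape[0] // win, pre.shape[1] // win
    field = np.zeros((ny, nx, 2))
    for i in range(ny):
        for j in range(nx):
            a = pre[i * win:(i + 1) * win, j * win:(j + 1) * win]
            b = post[i * win:(i + 1) * win, j * win:(j + 1) * win]
            # Refine the correlation peak to 1/upsample of a pixel.
            est, _, _ = phase_cross_correlation(a, b, upsample_factor=upsample)
            field[i, j] = est
    return field

# Using the synthetic pair from the previous sketch: windows that straddle the
# discontinuity mix the two true offsets, producing the near-rupture bias that a
# learned, per-pixel estimator aims to reduce.
offsets = windowed_offsets(pre, post)
```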