Multi-domain encoder–decoder neural networks for latent data assimilation in dynamical systems
Abstract
High-dimensional dynamical systems often require computationally intensive physics-based simulations, making data assimilation in the full physical space impractical. Latent data assimilation methods perform the assimilation in a reduced-order latent space for efficiency, but struggle with complex, nonlinear state–observation mappings. Recent approaches such as Generalized Latent Assimilation (GLA) and Latent Space Data Assimilation (LSDA) address heterogeneous latent spaces by incorporating surrogate mapping functions, but introduce additional computational costs and uncertainties. Furthermore, current algorithms that integrate data assimilation and deep learning still face limitations in handling non-explicit mapping functions. To address these challenges, this paper introduces a novel deep-learning-based data assimilation scheme, named Multi-domain Encoder–Decoder Latent Data Assimilation (MEDLA), capable of handling diverse data sources through a shared common latent space. The proposed approach significantly reduces the computational burden, since the complex mapping functions are mimicked by the multi-domain encoder–decoder neural network, and it enhances assimilation accuracy by minimizing interpolation and approximation errors. Extensive numerical experiments on three different test cases assess MEDLA’s performance in high-dimensional dynamical systems, benchmarking it against state-of-the-art latent data assimilation methods. The numerical results consistently underscore MEDLA’s superiority in managing multi-scale observational data and handling intricate, non-explicit mapping functions.
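To make the shared-latent-space idea concrete, the following is a minimal sketch, assuming PyTorch: two domain-specific encoders (one for the model state, one for the observations) map into a single latent space, where a simple variational update blends the encoded background and encoded observations before decoding back to the state domain. All layer sizes, the class and method names, and the identity-weighted latent misfit terms are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): domain-specific encoders share one
# latent space, so no explicit state-to-observation mapping is required.
import torch
import torch.nn as nn


LATENT_DIM = 16


def mlp(in_dim: int, out_dim: int) -> nn.Module:
    """Small fully connected block used for every encoder/decoder."""
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))


class MultiDomainAE(nn.Module):
    def __init__(self, state_dim: int, obs_dim: int):
        super().__init__()
        self.enc_state = mlp(state_dim, LATENT_DIM)  # encoder for the model-state domain
        self.enc_obs = mlp(obs_dim, LATENT_DIM)      # encoder for the observation domain
        self.dec_state = mlp(LATENT_DIM, state_dim)  # decoder back to the physical state

    def assimilate(self, x_background, y_obs, n_steps=50, lr=1e-2):
        """Latent-space variational update: both domains are encoded into the
        same latent space, and a 3D-Var-like cost is minimized there."""
        z_b = self.enc_state(x_background).detach()  # encoded background state
        z_o = self.enc_obs(y_obs).detach()           # encoded observations
        z = z_b.clone().requires_grad_(True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            # Background and observation misfits, both measured in latent space
            # (identity error covariances assumed for simplicity).
            loss = ((z - z_b) ** 2).sum() + ((z - z_o) ** 2).sum()
            loss.backward()
            opt.step()
        return self.dec_state(z)  # analysis state decoded to physical space


if __name__ == "__main__":
    model = MultiDomainAE(state_dim=100, obs_dim=30)
    x_b = torch.randn(1, 100)  # synthetic background state
    y = torch.randn(1, 30)     # synthetic multi-scale observations
    x_analysis = model.assimilate(x_b, y)
    print(x_analysis.shape)    # torch.Size([1, 100])
```

In this toy setting the encoders stand in for the otherwise complex or non-explicit state-to-observation mapping; how the shared latent space is trained and how error covariances are handled are exactly the design questions the paper addresses.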