DAM-SLAM: depth attention module in a semantic visual SLAM based on object interactions for dynamic environments
Abstract
Modern visual SLAM methods attempt to handle dynamic environments by adopting a non-rigid scene assumption. This well-established approach combines geometric and semantic information to detect dynamic objects and achieve accurate localization and mapping in real environments. However, these methods lack generalization and scene awareness, because their reasoning is limited by the object-labeling strategy and by the need for matched keypoints. We therefore propose a novel method, Depth Attention Module SLAM (DAM-SLAM), that overcomes these limitations. The main idea is to account for the influence of depth in the geometric and semantic modules through a depth-related adaptive threshold and impact factor. Moreover, a Bayesian filter refines the keypoint state estimates using a motion-probability update function based on a weighting strategy that depends on where each keypoint lies (inside or outside segmented object masks). In addition, we design a Depth Attention Module that generalizes to other methods by handling non-matched keypoints and keypoints outside the segmented regions. This module estimates the state of these keypoints without requiring any prior semantic information by determining the interactions between objects. We estimate these interactions through the correlation between the depth and positional proximity of these keypoints and the dynamic keypoints within a specific zone of influence of dynamic objects. The obtained results demonstrate the efficacy of the proposed method in providing accurate localization and mapping in dynamic environments.
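To make the three ideas summarized above concrete, the sketch below is a minimal illustration rather than the authors' implementation: the function names (adaptive_threshold, update_motion_probability, depth_attention_state), the particular form of the depth-adaptive threshold, and all weights, radii, and tolerances (w_in, w_out, zone_radius, depth_tol) are assumptions introduced here for illustration only.

```python
# Minimal sketch (not the paper's code) of a depth-related adaptive threshold
# for the geometric check, a weighted Bayesian motion-probability update, and
# a depth-attention test that labels unmatched / out-of-mask keypoints from
# their depth and positional proximity to dynamic keypoints lying in the
# zone of influence of a dynamic object. All parameters are assumed values.
import numpy as np


def adaptive_threshold(depth, base_thresh=1.0, alpha=0.5):
    """Epipolar-error threshold that grows with scene depth (assumed form)."""
    return base_thresh * (1.0 + alpha * depth)


def update_motion_probability(prior, geometric_dynamic, inside_mask,
                              w_in=0.9, w_out=0.6):
    """Bayesian-style update of a keypoint's motion probability.

    The likelihood is weighted according to whether the keypoint lies inside
    or outside a segmented object mask (weights are assumptions).
    """
    w = w_in if inside_mask else w_out
    likelihood = w if geometric_dynamic else (1.0 - w)
    return likelihood * prior / (
        likelihood * prior + (1.0 - likelihood) * (1.0 - prior))


def depth_attention_state(kp_xy, kp_depth, dyn_xy, dyn_depth,
                          zone_radius=40.0, depth_tol=0.3):
    """Label an unmatched or out-of-mask keypoint as dynamic when it is close,
    in both image position and depth, to dynamic keypoints inside the zone of
    influence of a dynamic object (radius and tolerance are assumptions)."""
    if len(dyn_xy) == 0:
        return False
    dist = np.linalg.norm(dyn_xy - kp_xy, axis=1)       # pixel distance
    in_zone = dist < zone_radius                        # within influence zone
    depth_close = np.abs(dyn_depth - kp_depth) < depth_tol
    return bool(np.any(in_zone & depth_close))


if __name__ == "__main__":
    # Toy example: one keypoint near a cluster of dynamic keypoints.
    depth = 2.5                                  # metres
    epipolar_error = 1.8                         # pixels
    geometric_dynamic = epipolar_error > adaptive_threshold(depth)
    posterior = update_motion_probability(prior=0.5,
                                          geometric_dynamic=geometric_dynamic,
                                          inside_mask=False)
    dyn_xy = np.array([[100.0, 120.0], [105.0, 118.0]])
    dyn_depth = np.array([2.4, 2.6])
    label = depth_attention_state(np.array([110.0, 125.0]), 2.5,
                                  dyn_xy, dyn_depth)
    print(f"motion probability: {posterior:.2f}, attention label: {label}")
```

In this toy run, the geometric check alone does not flag the keypoint (its error falls below the depth-adaptive threshold), yet the depth-attention test still marks it as dynamic because it is close in both position and depth to dynamic keypoints, which is the role the module plays for non-matched and out-of-mask keypoints.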