Analysis of the Recent AI for Pedestrian Navigation With Wearable Inertial Sensors
Abstract
Wearable devices embedding inertial sensors enable autonomous, seamless, and low-cost pedestrian navigation. Appealing as it is, the approach faces several challenges: measurement noise, different device-carrying modes, varying user dynamics, and individual walking characteristics. Recent research applies Artificial Intelligence (AI) to improve the robustness and accuracy of inertial navigation. Our analysis identifies two main categories of AI approaches, depending on how the inertial signals are segmented: either by human gait events (steps or strides) or into fixed-length inertial data segments. A theoretical analysis of the fundamental assumptions is carried out for each category. Two state-of-the-art AI algorithms (SELDA, RoNIN), representative of each category, and a gait-driven non-AI method (SmartWalk) are evaluated on a 2.17 km-long open-access dataset that reflects the diversity of pedestrians' mobility surroundings (open sky, indoors, forest, urban, parking lot). SELDA is an AI-based stride length estimation algorithm, RoNIN is an AI-based positioning method, and SmartWalk is a gait-driven non-AI positioning method. The experimental assessment highlights the distinct features of each category and their limits with respect to the underlying hypotheses. SELDA, RoNIN, and SmartWalk achieve average positioning errors (RMSE) of 8 m, 22 m, and 17 m, respectively, on six testing tracks recorded with two volunteers in various environments.
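To make the two segmentation categories concrete, the sketch below illustrates them on a synthetic accelerometer-norm signal: gait-driven segmentation cuts the stream at detected step peaks, while the alternative uses fixed-length windows regardless of gait events. This is a minimal illustration only, not the paper's method; the sampling rate, window length, peak-detection threshold, and all function names are assumptions chosen for the example.

```python
# Illustrative sketch (not from the paper): the two inertial-signal segmentation
# strategies contrasted in the abstract, on a synthetic accelerometer-norm signal.
# Sampling rate, window length, and thresholds are hypothetical choices.
import numpy as np
from scipy.signal import find_peaks

FS = 100.0  # assumed IMU sampling rate in Hz


def fixed_length_segments(signal: np.ndarray, window_len: int = 200) -> np.ndarray:
    """Fixed-length segmentation: cut the stream into windows of `window_len`
    samples (here 200 samples = 2 s at 100 Hz), independently of gait events."""
    n_windows = len(signal) // window_len
    return signal[: n_windows * window_len].reshape(n_windows, window_len)


def gait_event_segments(acc_norm: np.ndarray, min_step_interval_s: float = 0.4) -> list:
    """Gait-driven segmentation: detect step peaks in the acceleration norm and
    keep the samples between consecutive peaks, so each segment spans one step."""
    peaks, _ = find_peaks(
        acc_norm,
        height=acc_norm.mean() + acc_norm.std(),  # crude peak threshold (assumption)
        distance=int(min_step_interval_s * FS),   # minimum spacing between steps
    )
    return [acc_norm[s:e] for s, e in zip(peaks[:-1], peaks[1:])]


if __name__ == "__main__":
    # Synthetic walking-like signal: gravity + ~2 Hz step oscillation + noise.
    t = np.arange(0.0, 30.0, 1.0 / FS)
    acc_norm = 9.81 + 2.0 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(len(t))

    windows = fixed_length_segments(acc_norm)
    strides = gait_event_segments(acc_norm)
    print(f"{len(windows)} fixed-length windows, {len(strides)} gait-driven segments")
```

In this toy setup, the fixed-length windows have a constant duration while the gait-driven segments vary with the detected step rate, which is the essential difference between the two categories analyzed in the paper.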