AI-powered autonomous mobility system assisting blind digital twins
Abstract
Background and Aims: Despite recent technological advances, current mobility aids for blind people continue to face major challenges and remain of limited effectiveness. In particular, identifying an obstacle on the trajectory of a blind individual remains difficult. To overcome this hurdle, we aimed to leverage AI-driven image processing, which has demonstrated success in robot-assisted navigation, to develop an automated AI navigation system able to guide blind virtual subjects through a virtual environment. We propose (I) to create a virtual model of mobility by developing a 3D virtual environment comprising procedurally generated mazes, obstacles, and digital twins (DTs) wearing a camera; (II) to develop an AI navigation system that processes camera images in real time to guide the DT through the virtual maze; and (III) to evaluate the performance of the AI-driven navigation system when performing automatic semantic segmentation of obstacles on the camera input, assigning each image pixel a label (obstacle or not).

Materials and Methods: We developed a simulated environment in which DTs explore procedurally generated mazes using cameras. These cameras capture the visual surroundings either as semantic segmentation maps or as basic views of the scene. We developed an AI system based on reinforcement learning that processes this visual information in real time and uses it to guide DT navigation through the maze. The AI system was optimized to make the DT explore the maze while avoiding collisions with walls and obstacles. Ten DTs were trained to navigate and avoid obstacles using semantic segmentation maps (Group A) and were compared with 10 DTs trained using basic views (Group B). Each DT was trained by exploring 500 virtual mazes. For each group, mobility performance was then assessed on 100 previously unseen mazes by measuring the occurrence of collisions and the preferred navigation speed.

Results: Groups A (semantic segmentation) and B (basic camera view) completed 94% and 56% of trajectories without collision, respectively. The average number of collisions per subject was 0.09 in Group A and 0.66 in Group B. The average preferred speed was 6.1 km/h in Group A and 5.2 km/h in Group B.

Conclusions: AI-driven systems trained in the virtual environment allowed the DTs to avoid obstacles using a simple camera. Furthermore, our findings collectively suggest that semantic segmentation improves navigation, as evidenced by both collision-avoidance and preferred-speed metrics. Further studies are needed to explore the translation of these results into real-world applications for blind people. In particular, semantic segmentation could be used not only to guide a blind patient via AI, but also to extract and communicate the position and relative distance of obstacles in the patient's trajectory.
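For illustration, a semantic segmentation map of the kind described above can be represented as a per-pixel binary mask. The following minimal Python sketch is a toy stand-in: it derives obstacle labels from a simulated depth image via a distance threshold, which is an assumption made here for illustration only, not the segmentation model used in the study; the resolution and threshold are likewise placeholders.

```python
import numpy as np

H, W = 64, 64  # assumed camera resolution (placeholder)

def segmentation_map_from_depth(depth: np.ndarray, threshold: float = 1.5) -> np.ndarray:
    """Toy proxy for semantic segmentation: label every pixel closer than
    `threshold` metres as an obstacle (1) and everything else as free (0)."""
    return (depth < threshold).astype(np.uint8)

# Stand-in for a depth image produced by the simulator.
depth = np.random.uniform(0.5, 5.0, size=(H, W))
seg = segmentation_map_from_depth(depth)
print(seg.shape, seg.dtype, seg.mean())  # fraction of pixels labelled obstacle
```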
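The reinforcement learning setup can likewise be sketched in miniature. The abstract does not specify the algorithm or the observation encoding, so the toy tabular Q-learning loop below is only an assumption-laden illustration of the implied reward structure (penalize collisions, reward exploration); the actual agents consume camera images rather than grid coordinates, and the maze generator, reward values, and hyperparameters here are all placeholders.

```python
import random
from collections import defaultdict

def make_maze(n=8, p=0.2, rng=None):
    """n x n grid of cells; True marks an obstacle. Start cell kept free."""
    rng = rng or random.Random(0)
    maze = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    maze[0][0] = False
    return maze

def step(maze, state, action):
    """Colliding with a wall or obstacle costs -1; a free move earns +0.1."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    r, c = state
    dr, dc = moves[action]
    nr, nc = r + dr, c + dc
    n = len(maze)
    if not (0 <= nr < n and 0 <= nc < n) or maze[nr][nc]:
        return state, -1.0  # collision: stay put, negative reward
    return (nr, nc), 0.1    # free move: small exploration reward

Q = defaultdict(lambda: [0.0] * 4)  # state -> action values
rng = random.Random(1)
for episode in range(500):  # abstract: 500 training mazes per DT
    maze, state = make_maze(rng=random.Random(episode)), (0, 0)
    for t in range(100):
        # Epsilon-greedy action selection.
        if rng.random() < 0.1:
            a = rng.randrange(4)
        else:
            a = max(range(4), key=Q[state].__getitem__)
        nxt, reward = step(maze, state, a)
        # Standard Q-learning update.
        Q[state][a] += 0.1 * (reward + 0.9 * max(Q[nxt]) - Q[state][a])
        state = nxt
```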
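Finally, the metrics reported in the Results (share of collision-free trajectories, mean collisions per subject, mean preferred speed) reduce to simple aggregates over the 100 test runs per group. The helper below shows one plausible way to compute them; the function name and the example inputs are hypothetical placeholders, not the study's data.

```python
def evaluate(collision_counts, speeds_kmh):
    """Aggregate per-run collision counts and preferred speeds into the
    three metrics reported in the Results section."""
    runs = len(collision_counts)
    collision_free = sum(1 for c in collision_counts if c == 0) / runs
    return {
        "pct_collision_free": 100 * collision_free,           # e.g. 94% for Group A
        "mean_collisions": sum(collision_counts) / runs,      # e.g. 0.09 for Group A
        "mean_speed_kmh": sum(speeds_kmh) / len(speeds_kmh),  # e.g. 6.1 km/h for Group A
    }

# Example with placeholder logs (not the study's data):
print(evaluate(collision_counts=[0, 0, 1, 0], speeds_kmh=[6.0, 6.2, 5.9, 6.1]))
```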