MGRFormer: A Multimodal Transformer Approach for Surgical Gesture Recognition
Abstract
Automatic surgical gesture recognition has the potential to revolutionize the field of surgery by enhancing patient care, surgical training, and our understanding of surgical skills. By integrating kinematic data, which precisely captures hand movements, with video data that provides contextual understanding, multimodal machine learning can capture complementary information and substantially improve the accuracy of surgical gesture recognition systems. Recent research has highlighted the capabilities of Transformer-based models for temporal action segmentation. A key component of these models is the iterative refinement module, which enhances predictions using contextual information. In this study, we propose MGRFormer, a novel multimodal framework for surgical gesture recognition that leverages the interaction between kinematic and visual data at the refinement stage. We evaluated MGRFormer on the VTS dataset, and the results demonstrated that our approach outperformed unimodal and multimodal state-of-the-art methods by a large margin.
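To make the idea of refinement-stage fusion concrete, the following is a minimal PyTorch sketch of cross-attention between visual and kinematic feature sequences feeding a frame-wise gesture classifier. All module names, dimensions, and the residual refinement of prior-stage logits are illustrative assumptions, not the authors' actual MGRFormer implementation.

```python
# Illustrative sketch only: refinement-stage fusion of video and kinematic features.
# Dimensions, class count, and module names are hypothetical.
import torch
import torch.nn as nn

class CrossModalRefiner(nn.Module):
    """Refines frame-wise gesture logits by attending video features to kinematics."""
    def __init__(self, dim=64, num_classes=6, heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, video_feats, kin_feats, prev_logits=None):
        # video_feats, kin_feats: (batch, time, dim)
        fused, _ = self.cross_attn(query=video_feats, key=kin_feats, value=kin_feats)
        fused = self.norm(video_feats + fused)   # residual fusion of the two modalities
        logits = self.classifier(fused)          # (batch, time, num_classes)
        if prev_logits is not None:
            logits = logits + prev_logits        # refine predictions from a previous stage
        return logits

if __name__ == "__main__":
    refiner = CrossModalRefiner(dim=64, num_classes=6)
    video = torch.randn(2, 100, 64)       # hypothetical video feature sequence
    kinematics = torch.randn(2, 100, 64)  # hypothetical kinematic feature sequence
    logits = refiner(video, kinematics)
    print(logits.shape)  # torch.Size([2, 100, 6])
```

In this sketch, stacking several such refiners and feeding each stage the previous stage's logits mirrors the iterative refinement pattern described above, with the cross-modal interaction happening inside each refinement stage rather than only at the input.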