Gesture Recognition Based on the Fusion of Hand Positioning and Arm Gestures
Abstract
To improve the link between operators and equipment, communication systems have begun to use natural (user-oriented) modalities such as speech and gestures. Our goal is to present gesture recognition based on the fusion of measurements from different sources. The sensors must be able to capture at least the location and orientation of the hand; here this is done by a DataGlove and a video camera. The DataGlove gives the hand position, while the video camera gives the general arm gesture, representing the gesture's physical and spatial properties through a two-dimensional (2D) skeleton representation of the arm. The two measurements are partly complementary and partly redundant. The application is distributed over intelligent co-operating sensors. We detail the measurement of hand positioning and arm gestures, the fusion process, and the implementation.
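As a minimal illustration of how partly redundant measurements can be combined, the sketch below fuses two estimates of the same hand position by inverse-variance weighting. This is a hypothetical example, not the paper's actual fusion process (which is detailed in the body of the paper); the function `fuse_estimates`, the sensor models, and the variance figures are assumptions for illustration only.

```python
import numpy as np

def fuse_estimates(glove_pos, glove_var, skel_pos, skel_var):
    """Variance-weighted fusion of two redundant position estimates.

    glove_pos, skel_pos : 3-vectors, hypothetical hand-position estimates
        from the DataGlove tracker and from the camera's 2D arm skeleton.
    glove_var, skel_var : per-axis variances expressing each sensor's
        confidence (smaller variance = more trusted).
    Returns the fused position and its per-axis variance.
    """
    glove_pos = np.asarray(glove_pos, dtype=float)
    skel_pos = np.asarray(skel_pos, dtype=float)
    # Inverse-variance weights: each sensor contributes in proportion
    # to its confidence on each axis.
    w_glove = 1.0 / np.asarray(glove_var, dtype=float)
    w_skel = 1.0 / np.asarray(skel_var, dtype=float)
    fused = (w_glove * glove_pos + w_skel * skel_pos) / (w_glove + w_skel)
    fused_var = 1.0 / (w_glove + w_skel)
    return fused, fused_var

if __name__ == "__main__":
    # Illustrative numbers only: the glove is trusted more than the
    # skeleton-derived estimate on every axis.
    pos, var = fuse_estimates(
        glove_pos=[0.42, 0.10, 0.95],
        glove_var=[0.001, 0.001, 0.004],
        skel_pos=[0.45, 0.12, 0.90],
        skel_var=[0.010, 0.010, 0.010],
    )
    print("fused position:", pos)
    print("fused variance:", var)
```

The same weighting idea extends to the complementary case: where only one sensor observes a quantity (e.g., only the camera sees the overall arm configuration), that sensor's estimate is used directly rather than averaged.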