Human gesture segmentation based on change point model for efficient gesture interface
Abstract
Gestures are known to be an effective modality for naturally controlling robots and artificial agents. Unfortunately, gesture-based interactions remain cumbersome to use: they demand specific training sessions and clear separations between gestures, namely explicit pre- and post-strokes around the stroke that actually carries the command. These constraints stem from a common difficulty: time series segmentation. Indeed, clustering human motions into meaningful segments and isolating meaningful segments within a continuous movement flow pose the same problem: how to find the pre- and post-strokes. In machine learning, this problem is typically solved with training sets of carefully labelled data. Good segmentation improves the quality of a gesture recognition-based interface. Our contribution focuses on a non-parametric stochastic segmentation algorithm based on a change point model. Once the segmentation has been validated, we show how any novice user can build, in a semi-supervised way, his or her own gesture library. In addition, we show how the resulting system efficiently finds meaningful gestures (the ones learned earlier) within a continuous movement flow, thus removing the need to manually specify the beginning and the end of each movement. The proposed technique is assessed through a real-life example, in which a novice user creates an ad-hoc interface to control a robot in a natural way.
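To make the segmentation step concrete, the sketch below illustrates change point detection on a one-dimensional motion-like signal. It is a minimal illustration under stated assumptions, not the paper's algorithm: it uses the open-source ruptures library and its PELT detector with an RBF cost as a stand-in for the non-parametric stochastic model, and the synthetic signal merely mimics pre-stroke, stroke, and post-stroke phases.

```python
# Minimal sketch: change-point-based segmentation of a 1-D motion signal.
# NOT the paper's algorithm; the `ruptures` library (PELT detector, RBF cost)
# stands in for the non-parametric stochastic change point model.
import numpy as np
import ruptures as rpt

# Synthetic signal: three regimes mimicking the pre-stroke, stroke,
# and post-stroke phases of a gesture.
rng = np.random.default_rng(0)
signal = np.concatenate([
    rng.normal(0.0, 0.1, 100),   # pre-stroke: near-rest motion
    rng.normal(1.5, 0.4, 120),   # stroke: large, variable motion
    rng.normal(0.0, 0.1, 80),    # post-stroke: back to rest
])

# PELT searches for an unknown number of change points; the penalty `pen`
# trades off sensitivity against over-segmentation.
algo = rpt.Pelt(model="rbf", min_size=20).fit(signal.reshape(-1, 1))
breakpoints = algo.predict(pen=5)  # indices of segment ends, e.g. [100, 220, 300]

# Each (start, end) pair is a candidate segment to match against
# a learned gesture library.
segments = list(zip([0] + breakpoints[:-1], breakpoints))
print(segments)
```

In such a setup, the recovered segment boundaries would play the role of the pre- and post-strokes that users would otherwise have to mark by hand.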