Local Temporal Pattern and Data Augmentation for Micro-Expression Spotting
Abstract
Micro-expressions (MEs) are important nonverbal communication cues. However, they are local and brief, which makes them difficult to spot. In this article, we address this problem by using a dedicated local temporal pattern (LTP) of facial movement. This pattern takes a specific shape (an S-pattern) when an ME occurs, so a classical classification algorithm, a support vector machine (SVM), can distinguish MEs from other facial movements. We also propose a final, global fusion analysis over the whole face to better separate ME (local) movements from head (global) movements. However, learning S-patterns is limited by the small number of ME databases and the low number of ME samples. Hammerstein models (HMs) are known to approximate muscle movements well. By approximating each S-pattern with an HM, we can both filter out outliers and generate new, similar S-patterns. In this way, we augment the training set of S-patterns and improve the ability to differentiate MEs from other movements. Spotting results on the CASME I and CASME II databases show that the proposed LTP outperforms the most popular spotting method in terms of F1-score. Adding the fusion process and data augmentation further improves spotting performance.
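To make the classification step concrete, the following is a minimal illustrative sketch in Python with scikit-learn, not the authors' implementation: it trains an SVM to separate S-pattern-like temporal profiles from other movements. The synthetic_ltp helper, the sigmoid-plus-noise profiles, and all parameter values are assumptions standing in for the real LTP features extracted from facial videos.

    # Minimal sketch (not the authors' code): SVM classification of local
    # temporal patterns (LTPs) into S-pattern (ME) vs. other facial movement.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)

    def synthetic_ltp(is_s_pattern: bool, length: int = 30) -> np.ndarray:
        """Toy stand-in for an LTP: an S-shaped rise for MEs, noise otherwise."""
        t = np.linspace(-3, 3, length)
        if is_s_pattern:
            curve = 1.0 / (1.0 + np.exp(-t))           # S-shaped (sigmoid) profile
        else:
            curve = rng.normal(0.5, 0.1, size=length)  # irrelevant movement / noise
        return curve + rng.normal(0.0, 0.05, size=length)

    # Build a balanced toy dataset of LTP-like feature vectors.
    X = np.array([synthetic_ltp(i % 2 == 0) for i in range(400)])
    y = np.array([1 if i % 2 == 0 else 0 for i in range(400)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
    print("F1-score on held-out toy data:", f1_score(y_te, clf.predict(X_te)))

In the actual method, the positive class would come from annotated ME intervals, optionally enlarged with the additional S-patterns generated by the Hammerstein models, rather than from the synthetic profiles used here.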