Building a Statistical AU Space for Facial Expression Recognition in 3D
Abstract
A commonly accepted postulate is that facial expression recognition (FER) can be carried out by interpreting facial action units (AUs) through high-level decision rules. However, most studies on AU-based FER simply detect AUs and do not map the detection results to expressions. In this paper, we propose to build a statistical AU space for the purpose of AU interpretation. Similarity scores from the previously proposed statistical feature models define the coordinates of the expression displayed in a facial scan. These scores are then fed to an SVM classifier that interprets the expression as one of the six universal emotions. Preliminary results demonstrate the potential effectiveness of using the AU space for FER through AU interpretation.
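As a minimal sketch of the pipeline the abstract describes, the snippet below treats per-AU similarity scores as the coordinates of a scan in the AU space and classifies them with an SVM into one of the six universal emotions. All names, the RBF kernel choice, and the placeholder data are illustrative assumptions, not details taken from the paper.

```python
# Assumed pipeline: AU similarity scores from statistical feature models form
# the coordinates of a scan in the AU space; an SVM maps those coordinates to
# one of the six universal emotions. Data below is synthetic placeholder data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per 3D facial scan, one column per action unit; each entry is a
# similarity score in [0, 1] produced by a statistical feature model.
n_scans, n_aus = 120, 16
X = rng.random((n_scans, n_aus))       # AU-space coordinates
y = rng.integers(0, 6, size=n_scans)   # labels: six universal emotions

# SVM classifier interpreting AU-space coordinates as an expression label.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```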