Prediction of User Request and Complaint in Spoken Customer-Agent Conversations
Abstract
We present the HealthCall corpus, recorded in real-life conditions in the call center of Malakoff Humanis. It comprises two separate audio channels, one for the customer and one for the agent. Each conversation was anonymized in compliance with the General Data Protection Regulation. The corpus includes transcriptions of the spoken conversations and was divided into two sets: Train and Devel. Two important customer relationship management tasks were assessed on the HealthCall corpus: automatic prediction of the type of user request and complaint detection. For this purpose, we investigated 14 feature sets: 6 linguistic feature sets, 6 audio feature sets, and 2 vocal interaction feature sets. We used Bidirectional Encoder Representations from Transformers (BERT) models for the linguistic features, and openSMILE and Wav2Vec 2.0 for the audio features. The vocal interaction feature sets were designed and developed from turn-taking. The results show that the linguistic features consistently give the best results (91.2% for the Request task and 70.3% for the Complaint task). The Wav2Vec 2.0 features appear better suited to these two tasks than the ComParE16 features. The vocal interaction features outperformed the ComParE16 features on the Complaint task, reaching 57% with only six features.
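To illustrate the two embedding families compared above, the sketch below shows one plausible way to turn a conversation side into fixed-size feature vectors: mean-pooled Wav2Vec 2.0 hidden states for the audio channel and a BERT [CLS] embedding for the transcript. The Hugging Face checkpoints, the pooling strategy, and the function names are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import (
    Wav2Vec2Model,
    Wav2Vec2FeatureExtractor,
    AutoTokenizer,
    AutoModel,
)

# Hypothetical checkpoints: the paper does not state which pretrained models were used.
W2V_CHECKPOINT = "facebook/wav2vec2-base"
BERT_CHECKPOINT = "bert-base-multilingual-cased"


def wav2vec_features(waveform, sampling_rate=16_000):
    """Mean-pool Wav2Vec 2.0 hidden states into one audio feature vector.

    `waveform` is a 1-D float array of raw audio samples from a single channel.
    """
    extractor = Wav2Vec2FeatureExtractor.from_pretrained(W2V_CHECKPOINT)
    model = Wav2Vec2Model.from_pretrained(W2V_CHECKPOINT)
    inputs = extractor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # shape: (1, frames, dim)
    return hidden.mean(dim=1).squeeze(0)             # shape: (dim,)


def bert_features(transcript):
    """Use the [CLS] embedding of a BERT encoder as a linguistic feature vector."""
    tokenizer = AutoTokenizer.from_pretrained(BERT_CHECKPOINT)
    model = AutoModel.from_pretrained(BERT_CHECKPOINT)
    inputs = tokenizer(transcript, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)  # [CLS] token, shape: (dim,)


if __name__ == "__main__":
    import numpy as np

    dummy_audio = np.zeros(16_000, dtype=np.float32)   # 1 s of silence at 16 kHz
    print(wav2vec_features(dummy_audio).shape)          # e.g. torch.Size([768])
    print(bert_features("Bonjour, je voudrais un remboursement.").shape)
```

The resulting utterance- or conversation-level vectors can then be fed to any downstream classifier for the Request and Complaint tasks; the classifier itself is outside the scope of this sketch.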