Fusion of interest point/image based descriptors for efficient person re-identification
Abstract
The paper proposes a novel video-based person re-identification system that describes a person using both Interest Points (IP) and Image-based features. The Image-based descriptor extracts a global image representation that includes the silhouette but possibly also extra objects (e.g., an animal, a stroller), while the IP-based descriptor extracts salient points, each associated with a local region of one of the objects. Two re-identification systems are proposed: an IP-based system using SURF interest points matched via sparse representation, and an Image-based system using a Convolutional Neural Network. To harness both representations, we propose a fusion strategy based on the product rule applied to the scores, the scores being vote vectors associated with each descriptor for each person. Our proposal is evaluated on the large public PRID-2011 dataset, and the results show its effectiveness compared to the state of the art.
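To illustrate the score-level fusion described above, the following is a minimal sketch of combining two vote vectors with the product rule. The function name, the normalization step, and the example vote vectors are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def fuse_product_rule(ip_scores: np.ndarray, image_scores: np.ndarray) -> np.ndarray:
    """Fuse IP-based and Image-based vote vectors via the product rule."""
    # Normalize each vote vector so both descriptors contribute on a comparable scale
    # (normalization choice is an assumption for this sketch).
    ip = ip_scores / ip_scores.sum()
    img = image_scores / image_scores.sum()
    # Element-wise product: identities supported by both descriptors score highest.
    return ip * img

# Hypothetical usage: vote vectors over a gallery of 5 identities.
ip_votes = np.array([0.10, 0.40, 0.20, 0.20, 0.10])
cnn_votes = np.array([0.05, 0.50, 0.15, 0.20, 0.10])
fused = fuse_product_rule(ip_votes, cnn_votes)
predicted_identity = int(np.argmax(fused))  # index of the best-matching gallery person
```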