Efficient Codebook and Factorization for Second Order Representation Learning
Abstract
Learning rich and compact representations is an open topic in many
fields such as object recognition or image retrieval. Deep neural networks
have made a major breakthrough on these tasks over the last few years,
but their representations are not necessarily as rich
as needed nor as compact as expected. To build richer representations,
high-order statistics have been exploited and have shown excellent
performance, but at the cost of higher-dimensional features.
While this drawback has been partially addressed with factorization
schemes, the original compactness of first-order models has never
been recovered, or only at a heavy loss in performance. Our method,
by jointly integrating a codebook strategy with a factorization scheme,
produces compact representations while preserving second-order
performance with few additional parameters. This formulation
leads to state-of-the-art results on three image retrieval datasets.
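As a rough illustration of the ideas the abstract refers to, and not the authors' implementation, the following NumPy sketch contrasts the dimensionality of full second-order pooling with a low-rank factorization, and shows how an assumed soft-assignment codebook can enrich the pooled statistic while keeping it compact. All sizes, the projection W, and the codebook C are hypothetical.

```python
# Minimal sketch of second-order pooling, low-rank factorization, and
# codebook pooling. All names and dimensions are illustrative, not the
# paper's actual method or hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

# N local descriptors of dimension D (e.g., columns of a CNN feature map).
N, D = 196, 64
X = rng.standard_normal((N, D))

# Full second-order pooling: the D x D covariance-like matrix,
# flattened into a D^2-dimensional representation (4096 dims here).
full_repr = (X.T @ X / N).ravel()

# Low-rank factorization (assumed form): project each descriptor with
# W (D x R) and average the squared projections, giving an
# R-dimensional statistic instead of D^2 dimensions.
R = 32
W = rng.standard_normal((D, R)) / np.sqrt(D)
P = (X @ W) ** 2                 # (N, R) per-descriptor second-order terms
compact_repr = P.mean(axis=0)    # R dims instead of D^2

# Codebook (assumed soft assignment): each descriptor is softly assigned
# to K codewords, and each codeword pools its own compact statistic,
# yielding K*R dims, still far below D^2.
K = 8
C = rng.standard_normal((K, D))
logits = X @ C.T                              # similarity to each codeword
A = np.exp(logits - logits.max(1, keepdims=True))
A /= A.sum(1, keepdims=True)                  # soft assignments, (N, K)
pooled = (A.T @ P) / A.sum(0)[:, None]        # (K, R) per-codeword pooling
codebook_repr = pooled.ravel()                # K*R dims

print(full_repr.shape, compact_repr.shape, codebook_repr.shape)
# (4096,) (32,) (256,)
```

Under these assumed sizes, the codebook-plus-factorization representation is 16x smaller than the full second-order one while still separating statistics per codeword, which is the compactness-versus-richness trade-off the abstract describes.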