Predicting CNN learning accuracy using chaos measurement
Abstract
Learning on entropy-coded data has several benefits: it avoids decoding and allows one to process compact data. However, this type of learning has been overlooked because of the chaos introduced by entropy coding functions. Indeed, the convolution widely used in learning algorithms requires that the encoding function preserve the distance between pixel positions (spatial closeness) and the distance between pixel values (semantic closeness). Even though entropy coding satisfies neither property, we have previously shown that learning on entropy-coded data is possible and that the resulting accuracy depends on spatial and semantic closeness. In this paper, we quantify this dependence and introduce a new metric that measures the chaos in the data representation. This measure is easy to compute, as it depends only on the encoded data. Moreover, it allows one to predict the accuracy of a learning algorithm that processes entropy-coded data.
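The abstract does not reproduce the metric's definition, so the following Python sketch is only a rough illustration of the notion of semantic closeness it relies on, not the paper's metric: the hypothetical helper `semantic_closeness` estimates how well an encoding function preserves distances between pixel values by correlating value distances with distances between their encoded representations.

```python
import numpy as np

def semantic_closeness(encode, values=np.arange(256), n_pairs=2000, seed=0):
    """Toy proxy (illustrative only, not the paper's metric): correlation
    between distances of pixel values and distances of their encodings."""
    rng = np.random.default_rng(seed)
    a = rng.choice(values, n_pairs)
    b = rng.choice(values, n_pairs)
    d_val = np.abs(a - b)                  # distance in pixel-value space
    d_enc = np.abs(encode(a) - encode(b))  # distance in encoded space
    # Correlation near 1.0 means the encoding preserves value distances;
    # near 0.0 means the mapping is "chaotic" in the sense discussed above.
    return np.corrcoef(d_val, d_enc)[0, 1]

# An identity "encoding" preserves semantic closeness perfectly, while a
# random permutation of the value range (mimicking the arbitrary codeword
# assignment of an entropy coder) destroys it.
identity = lambda v: v
perm = np.random.default_rng(1).permutation(256)
shuffled = lambda v: perm[v]
print(semantic_closeness(identity))   # close to 1.0
print(semantic_closeness(shuffled))   # close to 0.0
```

A measure of this kind depends only on the encoding of the data, in line with the claim that the proposed metric can be computed from the encoded data alone.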