A Factorial Study of Neural Network Learning from Differences for Regression
Abstract
For regression tasks, training a neural network in a supervised way typically requires repeatedly presenting the network (over several iterations called epochs) with a set of items, each described by a number of features and an expected value, so that it learns to predict those values from those features. Inspired by case-based reasoning, several previous studies have hypothesized that there may be advantages in training such networks on differences between sets of features to predict differences between values. To test this hypothesis, we conducted a systematic factorial study on seven datasets and dataset variants. The goal is to understand how parameters such as the size of the training set, the number of training epochs, and the number of similar cases retrieved affect the performance of a network trained on differences, compared to one trained in the usual way. We find that learning from differences achieves results similar to or better than those of a network trained in the usual way. Our most significant finding, however, is that in all cases difference-based networks start obtaining good results after far fewer epochs than a network trained in the usual manner. In other words, they achieve similar results while requiring less training.
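To make the setting concrete, the following Python sketch illustrates one way difference-based learning for regression could be implemented. It is an assumed minimal illustration, not the authors' code: the random pairing scheme, the two-layer MLP architecture, the number of pairs, and the neighbor count k are all illustrative choices.

```python
# Minimal sketch of difference-based regression (assumed setup, not the
# paper's implementation), using NumPy and scikit-learn.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors

def make_difference_pairs(X, y, n_pairs=10_000, seed=None):
    """Sample random pairs (i, j) and build (x_i - x_j, y_i - y_j) examples."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), size=n_pairs)
    j = rng.integers(0, len(X), size=n_pairs)
    return X[i] - X[j], y[i] - y[j]

def fit_difference_model(X_train, y_train, epochs=50):
    """Train an MLP to predict value differences from feature differences."""
    dX, dy = make_difference_pairs(X_train, y_train)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=epochs)
    model.fit(dX, dy)
    return model

def predict_from_differences(model, X_train, y_train, X_query, k=5):
    """Retrieve k similar training cases for each query, predict the value
    difference to each, and average the reconstructed target values."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(X_query)
    preds = np.empty(len(X_query))
    for q, neighbors in enumerate(idx):
        diffs = X_query[q] - X_train[neighbors]          # feature differences
        dy_hat = model.predict(diffs)                    # predicted value differences
        preds[q] = np.mean(y_train[neighbors] + dy_hat)  # add back known values
    return preds
```

The retrieval step mirrors the case-based reasoning inspiration: the network only predicts how a query differs from known cases, and the known cases supply the absolute values.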