Practical complexity control in multilayer perceptrons
Abstract
Model selection, i.e. discovering the model which provides the best approximation to an input–output relationship, is a key problem of supervised learning. For flexible or non-parametric models, it is often performed via the control of model complexity. This paper is intended as an introduction to such methods in the context of neural networks; it illustrates and analyses the effect and behaviour of simple, practical complexity control techniques on an artificial problem. The paper focuses on multilayer perceptrons, which are among the most popular non-linear regression and classification models. It first provides a brief review of the model selection and complexity control techniques that have been proposed in the neural network community or adapted from statistics. Simple complexity control methods that have proved well suited to practical applications are then introduced, together with an experimental analysis aimed at illustrating why and how these methods work. The dependence of overfitting on network complexity is analysed and, from the perspective of the bias–variance trade-off, the evolution of the error and the effects of these techniques are characterised. Several tools for analysing the effects of complexity control on the behaviour of multilayer perceptrons are then introduced to provide complementary insights into the observed behaviour.
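Since the abstract frames its analysis in terms of the bias–variance trade-off, it may help to recall the standard decomposition of the expected squared error of a regression model (standard notation assumed here, not taken from the paper):

\[
\mathbb{E}\big[(y - \hat{f}(x))^2\big] \;=\; \sigma^2 \;+\; \big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2 \;+\; \mathbb{E}\Big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\Big]
\]

where \(f\) is the true regression function, \(\hat{f}\) the trained network, and \(\sigma^2\) the noise variance. Increasing model complexity typically reduces the squared-bias term while inflating the variance term, which is why complexity must be controlled.

As a concrete illustration of the kind of simple, practical complexity control technique the abstract refers to, the sketch below applies weight decay (an L2 penalty on the weights) to a one-hidden-layer perceptron trained by gradient descent on an artificial regression problem. All data, architecture, and hyperparameter choices here are illustrative assumptions, not the paper's actual experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D regression data: a noisy sine, a common artificial problem.
    X = rng.uniform(-np.pi, np.pi, size=(200, 1))
    y = np.sin(X) + 0.1 * rng.standard_normal((200, 1))

    # One-hidden-layer MLP, 1 -> H -> 1, with tanh hidden units.
    H = 20
    W1 = 0.5 * rng.standard_normal((1, H))
    b1 = np.zeros(H)
    W2 = 0.5 * rng.standard_normal((H, 1))
    b2 = np.zeros(1)

    lr = 0.05    # learning rate (assumed value)
    lam = 1e-3   # weight decay coefficient: the complexity control knob

    for epoch in range(2000):
        # Forward pass.
        A = np.tanh(X @ W1 + b1)    # hidden activations, shape (N, H)
        out = A @ W2 + b2           # network output, shape (N, 1)
        err = out - y

        # Penalized error: E = MSE + lam * (||W1||^2 + ||W2||^2).
        # Backward pass computes the gradients of E, including the
        # 2 * lam * W terms contributed by the penalty.
        g_out = 2 * err / len(X)                  # dE/d(out)
        gW2 = A.T @ g_out + 2 * lam * W2
        gb2 = g_out.sum(axis=0)
        g_hidden = (g_out @ W2.T) * (1 - A ** 2)  # back through tanh
        gW1 = X.T @ g_hidden + 2 * lam * W1
        gb1 = g_hidden.sum(axis=0)

        # Gradient descent step; the decay terms shrink weights toward 0.
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
    print(f"final training MSE: {mse:.4f}")

Shrinking the weights toward zero smooths the function computed by the network, trading a small increase in bias for a reduction in variance; the decay coefficient lam sets the effective complexity of the fitted model.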