Incremental Learning Algorithms for Classification and Regression: local strategies
Abstract
We present a new local strategy for solving incremental learning tasks. It avoids re-learning all of the parameters by selecting a working subset on which the incremental learning is performed. While this procedure can be applied to various schemes (hybrid decision trees, committee machines), we illustrate it with Support Vector Machines based on a local kernel. We derive and compare three methods for performing the selection procedure: two of them exploit estimates of the generalization error obtained from theoretical error bounds specific to SVMs. Experiments on three standard machine learning datasets give promising results.
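To make the working-subset idea concrete, the sketch below illustrates one plausible reading of it: when a new example arrives, only the points most similar to it under a local (RBF) kernel are selected, and the SVM is re-fitted on that subset rather than on the full dataset. This is a minimal illustration under assumed choices, not the paper's exact selection procedure; the function name `incremental_update`, the fixed subset size `k`, and the use of scikit-learn's `SVC` are assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC

def incremental_update(X, y, x_new, y_new, k=50, gamma=0.5):
    """Hypothetical sketch: select a local working subset around the new
    sample and re-learn an SVM on that subset only, instead of re-learning
    on all the data."""
    # Local (RBF) kernel similarity between the new point and the stored data.
    sims = np.exp(-gamma * np.sum((X - x_new) ** 2, axis=1))
    # Working subset: the k points most similar to the new example.
    subset = np.argsort(sims)[-k:]
    X_work = np.vstack([X[subset], x_new])
    y_work = np.append(y[subset], y_new)
    # Incremental step: fit only on the working subset.
    local_model = SVC(kernel="rbf", gamma=gamma).fit(X_work, y_work)
    # Return the updated model and the augmented dataset.
    return local_model, np.vstack([X, x_new]), np.append(y, y_new)
```

In this sketch the subset is chosen purely by kernel similarity; the paper's point is that better selection criteria, such as those derived from SVM generalization error bounds, can guide this choice.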