Feature induction by backpropagation
Abstract
A method for investigating the internal knowledge representation constructed by neural net learning is described: it is shown how, from a given weight matrix defining a feedforward artificial neural net, characteristic patterns can be induced for each of the classes of inputs classified by that net. These characteristic patterns, called prototypes, are found by a gradient descent search of the space of inputs. After an exposition of the theory, results are given for the well-known LED recognition problem, in which a network recognizes decimal digits displayed on a seven-segment LED display. Contrary to theoretical intuition, the experimental results indicate that the computed prototypes retain only some of the features of the original input patterns. The method thus appears to extract those features the net deems significant.
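The following is a minimal sketch, not the authors' implementation, of prototype induction by gradient descent in input space. It assumes a single-hidden-layer feedforward net with sigmoid units and a squared-error objective; the names `W1`, `b1`, `W2`, `b2`, `target_class`, and `induce_prototype` are illustrative placeholders. The weights stay fixed; the error is backpropagated one step further, to the input itself, which is then updated.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    h = sigmoid(W1 @ x + b1)   # hidden-layer activations
    y = sigmoid(W2 @ h + b2)   # class outputs
    return h, y

def induce_prototype(W1, b1, W2, b2, target_class, n_inputs,
                     lr=0.1, steps=1000):
    """Search input space for a pattern the (fixed) net maps to `target_class`.

    Hypothetical sketch: assumes sigmoid units and a squared-error objective.
    """
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=n_inputs)  # start from a random input
    t = np.zeros(W2.shape[0])
    t[target_class] = 1.0                     # desired output vector

    for _ in range(steps):
        h, y = forward(x, W1, b1, W2, b2)
        # Backpropagate the output error past the weights to the input;
        # only x is updated, the weight matrices are held fixed.
        delta_out = (y - t) * y * (1.0 - y)
        delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
        grad_x = W1.T @ delta_hid
        x -= lr * grad_x
        x = np.clip(x, 0.0, 1.0)              # keep inputs in a valid range
    return x
```

For the LED problem described above, `n_inputs` would be 7 (one unit per segment) and `target_class` one of the ten digits; the returned vector is the induced prototype for that digit.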