Conference paper, Year: 1994

Feature induction by backpropagation

Abstract

A method for investigating the internal knowledge representation constructed by neural net learning is described: it is shown how, from a given weight matrix defining a feedforward artificial neural net, we can induce characteristic patterns for each of the classes of inputs classified by that net. These characteristic patterns, called prototypes, are found by a gradient descent search of the space of inputs. After an exposition of the theory, results are given for the well-known LED recognition problem, where a network simulates recognition of decimal digits displayed on a seven-segment LED display. Contrary to theoretical intuition, the experimental results indicate that the computed prototypes retain only some of the features of the original input patterns. Thus it appears that the method extracts those features deemed significant by the net.
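As a rough illustration of the prototype search described in the abstract, the sketch below (not taken from the paper) performs gradient descent in the input space of an already-trained feedforward classifier: the weights are held fixed, and a candidate input is adjusted until the network assigns it to the desired class. The network `led_net`, the function name `induce_prototype`, and all hyperparameters are illustrative assumptions.

# Sketch of prototype induction by gradient descent in input space.
# `net`, `target_class`, step count and learning rate are illustrative
# placeholders, not the authors' code.
import torch
import torch.nn as nn

def induce_prototype(net: nn.Module, n_inputs: int, target_class: int,
                     steps: int = 500, lr: float = 0.1) -> torch.Tensor:
    net.eval()
    for p in net.parameters():          # freeze the weight matrix
        p.requires_grad_(False)

    x = torch.rand(1, n_inputs, requires_grad=True)   # random starting input
    opt = torch.optim.SGD([x], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    target = torch.tensor([target_class])

    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(x), target)  # error between output and desired class
        loss.backward()                 # backpropagate to the input, not the weights
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)          # keep the prototype in a valid input range
    return x.detach()

# Example: a 7-segment LED classifier stand-in (7 inputs, 10 digit classes);
# in practice this network would already be trained on the LED task.
led_net = nn.Sequential(nn.Linear(7, 12), nn.Sigmoid(), nn.Linear(12, 10))
prototype_for_3 = induce_prototype(led_net, n_inputs=7, target_class=3)

Run against a net actually trained on the LED task, such a search yields one input vector per digit class; according to the abstract, these vectors reproduce only some segments of the canonical digit patterns, namely those the net treats as significant.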
Main file: Ronald1994.pdf (835.32 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03526439, version 1 (23-03-2024)

Cite

Edmund Ronald, Marc Schoenauer, Michèle Sebag. Feature induction by backpropagation. 1994 IEEE International Conference on Neural Networks (ICNN'94), Jun 1994, Orlando, United States. pp. 531-534, vol. 1. ⟨10.1109/ICNN.1994.374220⟩. ⟨hal-03526439⟩