Learning location-invariant orthographic representations for printed words
Abstract
Neural networks were trained with backpropagation to map location-specific letter identities (letters coded as a function of their position in a horizontal array) onto location-invariant lexical representations. The networks were trained on a corpus of 1179 real words and on artificial lexica in which the importance of letter order was systematically manipulated. They were then tested on two benchmark phenomena, transposed-letter priming and relative-position priming, thought to reflect flexible orthographic processing in skilled readers. The networks exhibited the expected priming effects, and the size of these effects depended on how important letter-order information was for performing the location-invariant mapping. Presenting words at different locations proved critical for building flexible orthographic representations in these networks: this flexibility was absent when stimulus location did not vary.
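The abstract describes a backpropagation network that maps location-specific letter slots onto location-invariant word units, with each word presented at varying horizontal positions. The following is a minimal sketch of that kind of setup, not the authors' implementation: the slot count, toy lexicon, layer sizes, and learning rate are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (assumed details, not the paper's model): a one-hidden-layer
# backprop network maps location-specific letter identities (one-hot letters in
# a horizontal slot array) onto localist, location-invariant word units.
import numpy as np

rng = np.random.default_rng(0)

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
N_SLOTS = 10                                # horizontal array of letter positions (assumed)
LEXICON = ["cat", "cart", "trap", "part"]   # toy stand-in for the 1179-word corpus

def encode(word, start):
    """Location-specific input: one-hot letter identity per slot, word placed at `start`."""
    x = np.zeros(N_SLOTS * len(ALPHABET))
    for i, ch in enumerate(word):
        x[(start + i) * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out = N_SLOTS * len(ALPHABET), 50, len(LEXICON)
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
lr = 0.5

for epoch in range(2000):
    for w_idx, word in enumerate(LEXICON):
        # Present each word at a random location, so the mapping must become
        # location-invariant (the manipulation the abstract says is critical).
        start = rng.integers(0, N_SLOTS - len(word) + 1)
        x = encode(word, start)
        t = np.zeros(n_out); t[w_idx] = 1.0

        h = sigmoid(W1 @ x)             # forward pass
        y = sigmoid(W2 @ h)

        dy = (y - t) * y * (1 - y)      # backward pass (squared-error gradient)
        dh = (W2.T @ dy) * h * (1 - h)
        W2 -= lr * np.outer(dy, h)
        W1 -= lr * np.outer(dh, x)

# Probe location invariance: the same word at two different slot positions
# should activate the same lexical unit.
for start in (0, 4):
    y = sigmoid(W2 @ sigmoid(W1 @ encode("cart", start)))
    print(start, LEXICON[int(np.argmax(y))], y.max().round(2))
```

A priming test in this style would compare output activation for a transposed-letter prime (e.g. "crat") or a relative-position prime (e.g. "crt") against the trained word unit, which is how the flexibility effects described above could be probed in such a sketch.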
Domains
Cognitive science