Orthographic priming from unrelated primes: Heterogeneous feedforward inhibition predicted by associative learning

James S. Adelman, Iliyana V. Trifonova

Research output: Contribution to journal › Article › peer-review


Abstract

A common assumption among models of orthographic processing is that letter-word inhibitory relationships all share the same strength: activity in the letter B has the same impact on a word like RACE as does equivalent activity in the letter F. However, basic associative learning mechanisms imply that the existence of the neighbor word FACE gives more opportunity to learn a negative weight from the neighbor letter F than from the non-neighbor letter B, leading to stronger negative letter-word weights for neighbor than for non-neighbor letters. In masked primed lexical decision, therefore, fity, a neighborly prime formed from RACE's neighbor letters, should be a more inhibitory prime for RACE than bund (and vice versa for LARK). We present simulations of weight learning using Rescorla and Wagner's (1972) equations and three experiments consistent with this prediction. Further simulations show that heterogeneous feedforward connections from letters to words could contribute to phenomena previously attributed to lexical competition.
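The qualitative prediction can be illustrated with a minimal Rescorla-Wagner sketch: position-specific letter cues learn associations to word outcomes, and letters that occur in a target word's neighbors accrue more negative weights to that word than letters that do not. This is an illustrative toy, not the authors' simulation; the small lexicon, learning rate, and trial count below are arbitrary assumptions made for demonstration.

```python
# Illustrative sketch of position-specific letter -> word weight learning with the
# Rescorla-Wagner (1972) update rule. Lexicon and parameters are assumptions,
# not those used in the paper's simulations.
import random
from collections import defaultdict

LEXICON = ["RACE", "FACE", "LACE", "PACE", "RICE", "RATE", "LARK", "BARK", "DARK", "LARD"]
ALPHA_BETA = 0.05   # combined salience/learning-rate parameter
LAMBDA = 1.0        # asymptote for the presented word
N_TRIALS = 20000

# weights[(letter, position)][word] -> association strength
weights = defaultdict(lambda: defaultdict(float))

def cues_for(word):
    """Position-specific letter cues, e.g. FACE -> [('F', 0), ('A', 1), ...]."""
    return [(ch, pos) for pos, ch in enumerate(word)]

random.seed(1)
for _ in range(N_TRIALS):
    presented = random.choice(LEXICON)          # one word presented per learning trial
    cues = cues_for(presented)
    for word in LEXICON:
        target = LAMBDA if word == presented else 0.0
        total = sum(weights[c][word] for c in cues)   # summed prediction from present cues
        delta = ALPHA_BETA * (target - total)          # Rescorla-Wagner error term
        for c in cues:
            weights[c][word] += delta

# The neighbor letter F (from FACE) should end up more negative for RACE
# than the non-neighbor letter B (from BARK) in the same (first) position.
print("F(pos 0) -> RACE:", round(weights[('F', 0)]['RACE'], 3))
print("B(pos 0) -> RACE:", round(weights[('B', 0)]['RACE'], 3))
```

The mechanism is visible in the update: when FACE is presented, RACE's remaining letters (A, C, E) are also active and predict RACE, so the error term drives the F-to-RACE weight negative; when BARK is presented, far less of RACE is predicted, so B-to-RACE receives much weaker negative updates.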
Original language: English
Article number: 104372
Journal: Journal of Memory and Language
Volume: 127
Early online date: 9 Sep 2022
Publication status: Published - Dec 2022
