Pushing the Envelope of Associative Learning: Internal Representations and Dynamic Competition Transform Association into Development

Bob McMurray, Libo Zhao, Sarah C. Kucker, Larissa K. Samuelson

Research output: Chapter in Book/Report/Conference proceeding › Chapter

16 Citations (Scopus)


Work on learning word meanings has argued that associative learning mechanisms are insufficient because word learning is too fast, confronts too much ambiguity, or is based on social principles. These critiques target an outdated view of association, focusing on the information being learned rather than the mechanism of learning. The authors present a model that embeds associative learning in a richer system, one that includes both internal representations and real-time competition, enabling it to select the referent of novel and familiar words. A series of simulations validates these theoretical assumptions, showing better learning and novel word inference when both factors are present. The authors then use this model to understand the apparent rapidity of word learning and the value of high- and low-informativity learning situations. Finally, the authors scale the model up to examine interactions between auditory and visual categorization, and account for conflicting results as to whether words help or hinder categorization.
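The interplay the abstract describes can be illustrated with a toy sketch: associative weights built up from word-referent co-occurrence, plus a competition rule that suppresses candidate objects already linked to other words. This is not the chapter's actual implementation; the matrix size, learning rate, and 0.5 inhibition constant are illustrative assumptions.

```python
import numpy as np

N = 5                         # five familiar word-object pairs (toy lexicon)
W = np.zeros((N + 1, N + 1))  # extra row/column for a novel word and object

# Associative training: repeated word-referent co-occurrence slowly
# strengthens the corresponding weight.
for _ in range(200):
    for word in range(N):
        W[word, word] += 0.05  # object i is the referent of word i

def activation(word, obj):
    """Support for `obj` as the referent of `word`, reduced by the
    object's associations to rival words -- a crude stand-in for the
    real-time lateral inhibition in a dynamic-competition model."""
    rivals = np.delete(W[:, obj], word)
    return W[word, obj] - 0.5 * rivals.max()

def select_referent(word, candidates):
    """Competition resolves by picking the most active candidate."""
    acts = [activation(word, obj) for obj in candidates]
    return candidates[int(np.argmax(acts))]

# Familiar words pick their trained referents despite a distractor.
assert select_referent(0, [1, 0]) == 0

# Novel-word inference: the untrained word (index N) selects the novel
# object, because familiar objects are suppressed by their existing
# associations -- mutual-exclusivity-like behavior emerges from competition.
assert select_referent(N, [0, N]) == N
```

In this sketch, referent selection for a novel word requires no dedicated word-learning principle: the same association weights that support familiar-word comprehension, filtered through competition, produce the inference.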
Original language: English
Title of host publication: Theoretical and Computational Models of Word Learning
Subtitle of host publication: Trends in Psychology and Artificial Intelligence
Editors: Lakshmi Gogate, George Hollich
Publisher: IGI Global
Number of pages: 32
ISBN (Electronic): 9781466629745
ISBN (Print): 9781466629738
Publication status: Published - 2013