Modeling cross-modal interactions in early word learning

Nadja Althaus, Denis Mareschal

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)


Infancy research demonstrating a facilitation of visual category formation in the presence of verbal labels suggests that infants' object categories and words develop interactively. This contrasts with the notion that words are simply mapped "onto" previously existing categories. To investigate the computational foundations of a system in which word and object categories develop simultaneously and in an interactive fashion, we present a model of word learning based on interacting self-organizing maps that represent the auditory and visual modalities, respectively. While other models of lexical development have employed similar dual-map architectures, our model uses active Hebbian connections to propagate activation between the visual and auditory maps during learning. Our results show that categorical perception emerges from these early audio-visual interactions in both domains. We argue that the learning mechanism introduced in our model could play a role in the facilitation of infants' categorization through verbal labeling.
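The architecture described above — two self-organizing maps (SOMs), one per modality, linked by Hebbian connections that propagate activation between them during learning — can be illustrated with a minimal sketch. This is not the authors' implementation: map sizes, learning rates, and the toy category data are all illustrative assumptions.

```python
# Sketch of two interacting SOMs (visual and auditory) joined by Hebbian
# cross-modal connections. All hyperparameters and data are assumed.
import numpy as np

rng = np.random.default_rng(0)

MAP_SIDE = 6              # each SOM is a 6x6 grid (assumed size)
VIS_DIM, AUD_DIM = 8, 4   # toy visual / auditory feature dimensions
N_UNITS = MAP_SIDE * MAP_SIDE

# Grid coordinates of every unit, for neighbourhood computations
coords = np.array([(i, j) for i in range(MAP_SIDE)
                   for j in range(MAP_SIDE)], dtype=float)

vis_som = rng.random((N_UNITS, VIS_DIM))   # visual map weight vectors
aud_som = rng.random((N_UNITS, AUD_DIM))   # auditory map weight vectors
# Hebbian weights linking every visual unit to every auditory unit
hebb = np.zeros((N_UNITS, N_UNITS))

def bmu(som, x):
    """Index of the best-matching unit (closest weight vector) for input x."""
    return int(np.argmin(np.linalg.norm(som - x, axis=1)))

def neighbourhood(winner, sigma):
    """Gaussian neighbourhood activation centred on the winning unit."""
    d = np.linalg.norm(coords - coords[winner], axis=1)
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def train_step(vis_x, aud_x, lr=0.3, sigma=1.5, hebb_lr=0.1):
    """One joint update: standard SOM updates in each map, plus a Hebbian
    update strengthening links between co-active units across maps."""
    hv = neighbourhood(bmu(vis_som, vis_x), sigma)
    ha = neighbourhood(bmu(aud_som, aud_x), sigma)
    vis_som[:] += lr * hv[:, None] * (vis_x - vis_som)
    aud_som[:] += lr * ha[:, None] * (aud_x - aud_som)
    hebb[:] += hebb_lr * np.outer(hv, ha)   # Hebbian co-activation rule

# Toy data: two "categories", each pairing a visual prototype with a label
vis_protos = rng.random((2, VIS_DIM))
aud_protos = rng.random((2, AUD_DIM))

for _ in range(200):
    c = rng.integers(2)
    train_step(vis_protos[c] + 0.05 * rng.standard_normal(VIS_DIM),
               aud_protos[c] + 0.05 * rng.standard_normal(AUD_DIM))

# Cross-modal propagation: presenting a label activates the auditory map,
# and the Hebbian links spread that activation to the visual map.
a0 = bmu(aud_som, aud_protos[0])
vis_activation = hebb[:, a0]              # induced visual-map activation
predicted_vis = int(np.argmax(vis_activation))
```

The key design point is the active Hebbian update during learning (`hebb_lr * np.outer(hv, ha)`): cross-modal links are strengthened on every trial, so label activation shapes visual organization as categories form, rather than being mapped onto finished categories afterwards.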

Original language: English
Pages (from-to): 288-297
Number of pages: 10
Journal: IEEE Transactions on Autonomous Mental Development
Issue number: 4
Early online date: 28 Jun 2013
Publication status: Published - Dec 2013


Keywords
  • Computational modeling
  • Learning systems
  • Self-organizing networks
  • Speech processing
  • Text processing
  • Word learning
  • Categorization
  • Cross-modal interactions
  • Self-organizing maps
