Learning the generative principles of a symbol system from limited examples

Lei Yuan, Violet Xiang, David Crandall, Linda Smith

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)


The processes and mechanisms of human learning are central to inquiry in a number of fields, including psychology, cognitive science, development, education, and artificial intelligence. Debates linger over the questions of human learning, one of the most contentious being whether simple associative processes could explain human children's prodigious learning and, in doing so, lead to artificial intelligence that parallels human learning. One phenomenon at the center of these debates is a form of far generalization, sometimes referred to as "generative learning," because the learner's behavior seems to reflect more than co-occurrences among specifically experienced instances and to be based on principles through which new instances may be generated. In two experimental studies (N = 148) of preschool children's learning of how multi-digit number names map to their written forms, and in a computational modeling experiment using a deep-learning neural network, we show that data sets with a suite of inter-correlated, imperfectly predictive components yield far and systematic generalizations that accord with generative principles, and do so despite limited examples and exceptions in the training data. Implications for human cognition, cognitive development, education, and machine learning are discussed.

Original language: English
Article number: 104243
Early online date: 6 Mar 2020
Publication status: Published - 1 Jul 2020


Keywords
  • Associative learning
  • Deep learning
  • Education
  • Generative learning
  • Statistical learning
  • Symbol systems
