Computational mechanisms of curiosity and goal-directed exploration

Philipp Schwartenbeck, Johannes Passecker, Tobias U Hauser, Thomas H. B. FitzGerald, Martin Kronbichler, Karl J Friston

Research output: Contribution to journal › Article › peer-review



Successful behaviour depends on the right balance between maximising reward and soliciting information about the world. Here, we show how different types of information gain emerge when casting behaviour as surprise minimisation. We present two distinct mechanisms for goal-directed exploration that express separable profiles of active sampling to reduce uncertainty. 'Hidden state' exploration motivates agents to sample unambiguous observations to accurately infer the (hidden) state of the world. Conversely, 'model parameter' exploration compels agents to sample outcomes associated with high uncertainty, when these are informative for their representation of the task structure. We illustrate the emergence of these types of information gain, termed active inference and active learning, and show how these forms of exploration induce distinct patterns of 'Bayes-optimal' behaviour. Our findings provide a computational framework for understanding how distinct levels of uncertainty systematically affect the exploration-exploitation trade-off in decision-making.
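The two mechanisms contrasted in the abstract can be sketched numerically. Below is a minimal, illustrative implementation (not the authors' code): 'hidden state' exploration is rendered as the expected reduction in entropy over hidden states afforded by an observation, and 'model parameter' exploration as a novelty bonus derived from Dirichlet concentration counts over the likelihood mapping. The function names, the matrix `A`, and the counts `a` are assumptions introduced here for illustration.

```python
import numpy as np

def state_info_gain(A, qs):
    """'Hidden state' exploration: expected reduction in entropy over
    hidden states, given likelihood matrix A (outcomes x states) and
    a prior belief qs over states. Unambiguous likelihoods yield high
    information gain; uniform (ambiguous) likelihoods yield none."""
    qo = A @ qs                                   # predicted outcome distribution
    H_prior = -np.sum(qs * np.log(qs + 1e-16))    # entropy of prior belief
    ig = 0.0
    for o in range(A.shape[0]):
        post = A[o] * qs
        if post.sum() < 1e-16:
            continue
        post /= post.sum()                        # posterior belief given outcome o
        H_post = -np.sum(post * np.log(post + 1e-16))
        ig += qo[o] * (H_prior - H_post)          # expected entropy reduction
    return ig

def param_info_gain(a, qs):
    """'Model parameter' exploration: a novelty bonus favouring outcomes
    whose likelihood parameters are still uncertain, using Dirichlet
    counts a (same shape as A). Small counts -> large bonus."""
    A = a / a.sum(axis=0, keepdims=True)          # expected likelihood
    W = 0.5 * (1.0 / a - 1.0 / a.sum(axis=0, keepdims=True))
    return float(qs @ (A * W).sum(axis=0))        # expected novelty per state

# An identity likelihood (unambiguous) resolves all state uncertainty,
# whereas a uniform likelihood resolves none:
qs = np.array([0.5, 0.5])
print(state_info_gain(np.eye(2), qs))             # approx. log(2)
print(state_info_gain(np.full((2, 2), 0.5), qs))  # approx. 0
```

Under this sketch, an agent that scores candidate observations by `state_info_gain` samples unambiguous outcomes (active inference), while one that scores them by `param_info_gain` seeks outcomes whose contingencies it has rarely observed (active learning), mirroring the two profiles described above.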

Original language: English
Article number: e41703
Publication status: Published - 10 May 2019