Gradient-based MCMC samplers for dynamic causal modelling

Biswa Sengupta (Lead Author), Karl J Friston, Will D. Penny

Research output: Contribution to journal › Article › peer-review

35 Citations (Scopus)


In this technical note, we derive two MCMC (Markov chain Monte Carlo) samplers for dynamic causal models (DCMs). Specifically, we use (a) Hamiltonian MCMC (HMC-E), where sampling is simulated using Hamilton's equations of motion, and (b) Langevin Monte Carlo algorithms (LMC-R and LMC-E) that simulate the Langevin diffusion of samples using gradients either on a Euclidean (E) or on a Riemannian (R) manifold. While LMC-R requires minimal tuning, the performance of HMC-E depends heavily on its tuning parameters. These parameters are therefore optimised by learning a Gaussian process model of the time-normalised sample correlation matrix. This allows one to formulate an objective function that balances tuning-parameter exploration and exploitation, furnishing an intervention-free inference scheme. Using neural mass models (NMMs), a class of biophysically motivated DCMs, we find that HMC-E is statistically more efficient than LMC-R (with a Riemannian metric); yet both gradient-based samplers are far superior to the random walk Metropolis algorithm, which proves inadequate to steer away from dynamical instability.
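To make the two gradient-based schemes concrete, the following is a minimal sketch of a Metropolis-adjusted Langevin step (Euclidean metric, as in LMC-E) and an HMC step with a leapfrog integration of Hamilton's equations, applied to a toy 2-D Gaussian target. This is an illustrative assumption, not the paper's DCM implementation: all step sizes, trajectory lengths, and the target density are hypothetical choices for demonstration.

```python
import numpy as np

# Toy target: 2-D Gaussian (illustrative only, not a neural mass model)
rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
Prec = np.linalg.inv(Sigma)

def log_p(x):
    return -0.5 * x @ Prec @ x          # unnormalised log density

def grad_log_p(x):
    return -Prec @ x                    # gradient of the log density

def langevin_step(x, eps):
    """One Metropolis-adjusted Langevin (MALA) step on a Euclidean metric."""
    prop = x + 0.5 * eps**2 * grad_log_p(x) + eps * rng.standard_normal(2)

    def log_q(a, b):                    # log proposal density q(a | b)
        d = a - (b + 0.5 * eps**2 * grad_log_p(b))
        return -(d @ d) / (2 * eps**2)

    log_alpha = log_p(prop) + log_q(x, prop) - log_p(x) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x

def hmc_step(x, eps, L):
    """One HMC step: leapfrog-integrate Hamilton's equations, then accept/reject."""
    p = rng.standard_normal(2)          # resample momentum
    x_new, p_new = x.copy(), p.copy()
    for _ in range(L):                  # leapfrog integrator
        p_new = p_new + 0.5 * eps * grad_log_p(x_new)
        x_new = x_new + eps * p_new
        p_new = p_new + 0.5 * eps * grad_log_p(x_new)
    # Metropolis correction on the Hamiltonian H = -log p(x) + p·p / 2
    log_alpha = (log_p(x_new) - 0.5 * p_new @ p_new) - (log_p(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_alpha else x

x_l = x_h = np.zeros(2)
samples_l, samples_h = [], []
for _ in range(5000):
    x_l = langevin_step(x_l, eps=0.6)
    x_h = hmc_step(x_h, eps=0.25, L=10)
    samples_l.append(x_l)
    samples_h.append(x_h)

cov_h = np.cov(np.array(samples_h).T)
print(cov_h)                            # sample covariance should approach Sigma
```

Both kernels use the same gradient information; HMC's leapfrog trajectory (here `L = 10` steps) lets it propose distant, nearly independent states, which is the source of the statistical efficiency gain the note reports, at the cost of two extra tuning parameters (`eps` and `L`) that the paper's Gaussian-process scheme optimises automatically.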

Original language: English
Pages (from-to): 1107-1118
Number of pages: 12
Early online date: 23 Jul 2015
Publication status: Published - 15 Jan 2016


  • Algorithms
  • Bayes Theorem
  • Humans
  • Computer-Assisted Image Interpretation
  • Markov Chains
  • Theoretical Models
  • Monte Carlo Method
  • Neuroimaging
  • Comparative Study