Spatially generalizable representations of facial expressions: Decoding across partial face samples

Steven G. Greening, Derek G. V. Mitchell, Fraser W. Smith

Research output: Contribution to journal › Article › peer-review



A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions in this network permit generalization across independent samples of face information (e.g. eye region vs. mouth region). We presented participants with partial face samples from five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions (dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex) enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' versus 'eyes removed'). Furthermore, classification performance was correlated with behavioral performance in STS and dPFC. Our results demonstrate that both higher-level (e.g. STS, dPFC) and lower-level cortical regions contain information useful for facial expression decoding that goes beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging viewing conditions.
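The cross-decoding logic described in the abstract (train a classifier on multi-voxel patterns from one partial-face condition, test it on patterns from the independent condition) can be sketched roughly as follows. This is an illustrative sketch on synthetic data, not the authors' pipeline: the voxel and trial counts, noise model, and choice of a logistic-regression classifier are all assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels, n_classes = 100, 50, 5  # hypothetical sizes

# Class-specific "signal" patterns shared across the two face samples,
# mimicking an expression code that generalizes over visual input.
signal = rng.normal(0, 1, (n_classes, n_voxels))
labels = rng.integers(0, n_classes, n_trials)

def simulate(labels, noise=1.0):
    """Synthetic single-trial voxel patterns: shared signal plus noise."""
    return signal[labels] + rng.normal(0, noise, (len(labels), n_voxels))

X_eyes_only = simulate(labels)     # 'eyes only' condition
X_eyes_removed = simulate(labels)  # 'eyes removed' condition

# Cross-decoding: fit on one condition, evaluate on the other.
clf = LogisticRegression(max_iter=1000).fit(X_eyes_only, labels)
acc = clf.score(X_eyes_removed, labels)
print(f"cross-decoding accuracy: {acc:.2f} (chance = {1 / n_classes:.2f})")
```

Above-chance accuracy on the held-out condition is the signature of a representation that generalizes across visual inputs; in the study this comparison is made per brain region against behavioral performance.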
Original language: English
Pages (from-to): 31-43
Number of pages: 13
Early online date: 6 Dec 2017
Publication status: Published - Apr 2018


  • emotion recognition
  • facial expression
  • emotion
  • fMRI
  • MVPA
