Learning computational models of video memorability from fMRI brain imaging

Junwei Han, Changyuan Chen, Ling Shao, Xintao Hu, Jungong Han, Tianming Liu

Research output: Contribution to journal › Article › peer-review

57 Citations (Scopus)

Abstract

In general, different visual media are not equally memorable to the human brain. This paper explores a new direction: modeling the memorability of video clips and automatically predicting how memorable they are by learning from functional magnetic resonance imaging (fMRI) of the brain. We propose a novel computational framework that integrates low-level audiovisual features with brain activity decoded from fMRI. First, a user study is conducted to create a ground-truth database for measuring video memorability, and a set of effective low-level audiovisual features is examined on this database. Then, fMRI data are acquired from human subjects while they watch the video clips. fMRI-derived features that capture the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, because fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset via joint subspace learning, with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
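The abstract does not specify the joint subspace learning algorithm. As a rough, hedged illustration of the idea, the sketch below uses canonical correlation analysis (CCA), one standard way to maximize correlation between two feature sets, followed by a simple regressor trained on ground-truth memorability scores. The feature dimensions, scikit-learn's CCA, and the ridge regressor are assumptions made for illustration only, not the authors' actual implementation.

```python
# Minimal sketch of the pipeline described in the abstract, using CCA as one
# concrete instance of joint subspace learning. All names, shapes, and data
# here are illustrative assumptions, not the authors' released code.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data:
#   X_av   - low-level audiovisual features for N training clips
#   X_fmri - fMRI-derived features for the same clips (requires scanning)
#   y_mem  - ground-truth memorability scores from the user study
N, d_av, d_fmri = 200, 128, 300
X_av = rng.standard_normal((N, d_av))
X_fmri = rng.standard_normal((N, d_fmri))
y_mem = rng.random(N)

# Joint subspace learning: find projections of the audiovisual and
# fMRI feature spaces whose components are maximally correlated.
cca = CCA(n_components=10)
cca.fit(X_av, X_fmri)
Z_av, Z_fmri = cca.transform(X_av, X_fmri)

# Map the audiovisual projection to memorability scores, so fMRI data
# are only needed at training time.
reg = Ridge(alpha=1.0).fit(Z_av, y_mem)

# At test time, new clips are projected with the audiovisual side of the
# learned subspace only; no fMRI scan is required.
X_av_test = rng.standard_normal((20, d_av))
Z_av_test = cca.transform(X_av_test)
pred_memorability = reg.predict(Z_av_test)
print(pred_memorability[:5])
```

In this reading, the fMRI features act as a training-time supervisory signal that shapes the shared subspace, while prediction on unseen videos relies on audiovisual features alone.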
Original language: English
Pages (from-to): 1692-1703
Journal: IEEE Transactions on Cybernetics
Volume: 45
Issue number: 8
Early online date: 9 Oct 2014
DOIs
Publication status: Published - Aug 2015
