Perception and Automatic Recognition of Laughter from Whole-Body Motion: Continuous and Categorical Perspectives

Harry Griffin, Min Hane Aung, Bernardino Romera-Paredes, Ciaran McLoughlin, Gary McKeown, William Curran, Nadia Bianchi-Berthouze

Research output: Contribution to journal › Article

18 Citations (Scopus)

Abstract

Despite its importance in social interactions, laughter remains little studied in affective computing. Intelligent virtual agents are often blind to users’ laughter and unable to produce convincing laughter themselves. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received less attention. The aim of this study is threefold. First, to probe human laughter perception by analyzing patterns of categorisations of natural laughter animated on a minimal avatar. Results reveal that a low-dimensional space can describe perception of laughter “types”. Second, to investigate observers’ perception of laughter (hilarious, social, awkward, fake, and non-laughter) based on animated avatars generated from natural and acted motion-capture data. Significant differences in torso and limb movements are found between animations perceived as laughter and those perceived as non-laughter. Hilarious laughter also differs from social laughter. Different body movement features were indicative of laughter in sitting and standing avatar postures. Third, to investigate automatic recognition of laughter to the same level of certainty as observers’ perceptions. Results show that recognition rates of the Random Forest model approach human rating levels. Classification comparisons and feature-importance analyses indicate an improvement in the recognition of social laughter when localized features and nonlinear models are used.
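As a rough illustration of the classification approach the abstract describes, the sketch below trains a Random Forest on body-movement feature vectors and inspects feature importances. All data, feature counts, and class labels here are synthetic placeholders, not the study's motion-capture features or results.

```python
# Hedged sketch: Random Forest classification of laughter categories from
# body-movement features. Data and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 200, 12  # e.g. per-clip torso/limb movement statistics
X = rng.normal(size=(n_samples, n_features))
# Four illustrative classes: hilarious, social, awkward/fake, non-laughter
y = rng.integers(0, 4, size=n_samples)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
clf.fit(X, y)

# Feature importances indicate which movement features drive classification,
# analogous to the feature-importance analyses mentioned in the abstract.
importances = clf.feature_importances_
```

With real data, the importance ranking would highlight which localized body-movement features distinguish, for example, social laughter from non-laughter.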
Original language: English
Pages (from-to): 165-178
Journal: IEEE Transactions on Affective Computing
Volume: 6
Issue number: 2
DOIs
Publication status: Published - 11 Jan 2015