Abstract
Despite its importance in social interactions, laughter remains little studied in affective computing. Intelligent virtual agents are often blind to users’ laughter and unable to produce convincing laughter themselves. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received less attention. The aim of this study is threefold. First, to probe human laughter perception by analyzing patterns of categorisations of natural laughter animated on a minimal avatar. Results reveal that a low-dimensional space can describe the perception of laughter “types”. Second, to investigate observers’ perception of laughter (hilarious, social, awkward, fake, and non-laughter) based on animated avatars generated from natural and acted motion-capture data. Significant differences in torso and limb movements are found between animations perceived as laughter and those perceived as non-laughter; hilarious laughter also differs from social laughter. Different body-movement features are indicative of laughter in sitting and standing avatar postures. Third, to investigate whether laughter can be recognised automatically to the same level of certainty as observers’ perceptions. Results show that the recognition rates of a Random Forest model approach human rating levels. Classification comparisons and feature-importance analyses indicate an improvement in the recognition of social laughter when localized features and nonlinear models are used.
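As a minimal, from-scratch illustration of the Random Forest idea named in the abstract, the sketch below trains an ensemble of decision stumps, each on a bootstrap resample, and takes a majority vote on whether a clip is laughter or non-laughter. The single feature, thresholds, and synthetic data are hypothetical stand-ins; they are not the paper's actual motion-capture features or results.

```python
# Hypothetical sketch of a Random Forest-style ensemble: bagged decision
# stumps with majority voting. Data are synthetic, not from the paper.
import random

random.seed(0)

def train_stump(data):
    """Pick the (feature, threshold) split with the fewest errors."""
    best = None
    for f in range(len(data[0][0])):
        for point, _ in data:
            t = point[f]
            errs = sum((x[f] > t) != y for x, y in data)
            if best is None or errs < best[0]:
                best = (errs, f, t)
    _, f, t = best
    return lambda x: x[f] > t

def train_forest(data, n_trees=25):
    """Bagging: each stump sees a bootstrap resample of the data."""
    return [train_stump([random.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict(forest, x):
    """Majority vote across the stumps in the ensemble."""
    votes = sum(tree(x) for tree in forest)
    return votes * 2 > len(forest)

# Synthetic clips: one stand-in feature (e.g. torso-movement energy)
# separates laughter (True) from non-laughter (False).
data = ([([0.9 + random.random()], True) for _ in range(30)] +
        [([random.random() * 0.5], False) for _ in range(30)])
forest = train_forest(data)
print(predict(forest, [1.5]), predict(forest, [0.1]))
```

In a real pipeline one would instead use a library implementation (e.g. scikit-learn's `RandomForestClassifier`), which also exposes the feature importances that the abstract's analysis relies on; the point here is only the bagging-plus-voting mechanism.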
Original language | English |
---|---|
Pages (from-to) | 165-178 |
Number of pages | 14 |
Journal | IEEE Transactions on Affective Computing |
Volume | 6 |
Issue number | 2 |
DOIs | |
Publication status | Published - 11 Jan 2015 |
Profiles
- Min Hane Aung
  - School of Computing Sciences: Associate Professor in Computing Sciences
  - Norwich Epidemiology Centre: Member
  - Colour and Imaging Lab: Member
  - Smart Emerging Technologies: Member