MMLI: Multimodal Multiperson Corpus of Laughter in Interaction

Radoslaw Niewiadomski, Maurizio Mancini, Tobias Baur, Giovanna Varni, Harry Griffin, Min S. H. Aung

Research output: Chapter in Book/Report/Conference proceeding › Chapter

22 Citations (Scopus)

Abstract

The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter, with a focus on full-body movements and different laughter types. The corpus contains both induced and interactive laughs from human triads. In total, we collected 500 laughter episodes from 16 participants. The data consist of 3D body position information, facial tracking, multiple audio and video channels, as well as physiological data.

In this paper, we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and for synchronization between different independent data sources. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of an analysis of nonverbal behavior patterns in laughter.
Original language: English
Title of host publication: Human Behavior Understanding
Publisher: Springer
Chapter: 16
Pages: 184-195
Number of pages: 12
ISBN (Electronic): 978-3-319-02714-2
ISBN (Print): 978-3-319-02713-5
DOIs
Publication status: Published - 2013

Publication series

Name: Human Behavior Understanding
Volume: 8212
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
