Predicting Head Pose in Dyadic Conversation

David Greenwood, Stephen Laycock, Iain Matthews

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Natural movement plays a significant role in realistic speech animation. Numerous studies have demonstrated the contribution visual cues make to the degree to which we, as human observers, find an animation acceptable. Rigid head motion is one visual mode that universally co-occurs with speech, and so it is a reasonable strategy to seek features from the speech mode to predict the head pose. Several previous authors have shown that prediction is possible, but experiments are typically confined to rigidly produced dialogue.

Expressive, emotive and prosodic speech exhibits motion patterns that are far more difficult to predict, with considerable variation in expected head pose. People involved in dyadic conversation adapt their speech and head motion in response to the other's speech and head motion. Using Deep Bi-Directional Long Short Term Memory (BLSTM) neural networks, we demonstrate that it is possible to predict not just the head motion of the speaker, but also the head motion of the listener, from the speech signal.
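The abstract describes mapping a speech signal to per-frame head pose with deep BLSTM networks. As a rough illustration of that idea only, and not the authors' implementation, the sketch below shows a bidirectional LSTM regressor in PyTorch; the feature dimension (13-d MFCC-like frames), hidden size, layer count, and three-angle pose parameterisation are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class BLSTMHeadPose(nn.Module):
    """Illustrative bidirectional LSTM regressor: speech feature frames -> head pose angles."""

    def __init__(self, feat_dim=13, hidden=256, layers=3, pose_dim=3):
        super().__init__()
        # Assumed sizes; the paper's actual architecture may differ.
        self.blstm = nn.LSTM(feat_dim, hidden, num_layers=layers,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, pose_dim)  # 2x hidden: forward + backward states

    def forward(self, x):
        # x: (batch, frames, feat_dim) speech features
        h, _ = self.blstm(x)          # h: (batch, frames, 2 * hidden)
        return self.out(h)            # per-frame pose prediction, e.g. pitch, yaw, roll


# Example usage: a batch of 4 utterances, each 200 frames of 13-d features.
model = BLSTMHeadPose()
speech = torch.randn(4, 200, 13)
pose = model(speech)                  # shape (4, 200, 3)
```

Training such a model as a frame-wise regression (e.g. mean squared error against tracked head rotations) would follow the usual sequence-learning recipe; whether the prediction target is the speaker's or the listener's pose is simply a choice of which participant's head track is paired with the speech features.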
Original language: English
Title of host publication: International Conference on Intelligent Virtual Agents
Publisher: Springer
Pages: 160-169
Number of pages: 10
Volume: 10498
ISBN (Electronic): 978-3-319-67401-8
ISBN (Print): 978-3-319-67400-1
DOIs
Publication status: Published - 26 Aug 2017

Publication series

Name: Intelligent Virtual Agents
Volume: 10498
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
