Synthesising visual speech using dynamic visemes and deep learning architectures

Ausdang Thangthai, Ben Milner (Lead Author), Sarah Taylor

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)
35 Downloads (Pure)

Abstract

This paper proposes and compares a range of methods to improve the naturalness of visual speech synthesis. A feedforward deep neural network (DNN) and many-to-one and many-to-many recurrent neural networks (RNNs) using long short-term memory (LSTM) are considered. Rather than using acoustically derived units of speech, such as phonemes, viseme representations are considered and we propose using dynamic visemes together with a deep learning framework. The input feature representation to the models is also investigated, and we determine that including wide phoneme and viseme contexts is crucial for predicting realistic lip motions that are sufficiently smooth but not under-articulated. A detailed objective evaluation across a range of system configurations shows that a combined dynamic viseme-phoneme speech unit, used with a many-to-many encoder-decoder architecture, models visual co-articulation effectively. Subjective preference tests reveal no significant difference between animations produced by this system and those generated from ground truth facial motion taken from the original video. Furthermore, the dynamic viseme system significantly outperforms conventional phoneme-driven speech animation systems.
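For illustration, the best-performing configuration described in the abstract pairs combined dynamic viseme and phoneme context features with a many-to-many encoder-decoder network. The sketch below is not the authors' implementation; it is a minimal PyTorch outline of that kind of architecture, in which the input feature dimension, hidden size, number of visual parameters and the teacher-forced decoder input are all assumptions made purely for the example.

    # Minimal sketch (assumed dimensions, not the paper's code) of a
    # many-to-many LSTM encoder-decoder mapping per-frame linguistic
    # context features (e.g. wide phoneme/dynamic-viseme windows) to
    # visual speech parameters.
    import torch
    import torch.nn as nn

    class Seq2SeqVisualSpeech(nn.Module):
        def __init__(self, in_dim=600, hidden_dim=256, out_dim=30):
            super().__init__()
            # Encoder reads the whole linguistic feature sequence.
            self.encoder = nn.LSTM(in_dim, hidden_dim, batch_first=True)
            # Decoder emits one visual parameter vector per video frame,
            # conditioned on the encoder's final state.
            self.decoder = nn.LSTM(out_dim, hidden_dim, batch_first=True)
            self.project = nn.Linear(hidden_dim, out_dim)

        def forward(self, linguistic_feats, prev_visual_frames):
            # linguistic_feats:    (batch, T_in, in_dim)
            # prev_visual_frames:  (batch, T_out, out_dim), teacher forcing
            _, (h, c) = self.encoder(linguistic_feats)
            dec_out, _ = self.decoder(prev_visual_frames, (h, c))
            return self.project(dec_out)  # (batch, T_out, out_dim)

    if __name__ == "__main__":
        model = Seq2SeqVisualSpeech()
        feats = torch.randn(2, 100, 600)   # dummy context features
        prev = torch.zeros(2, 100, 30)     # dummy previous visual frames
        print(model(feats, prev).shape)    # torch.Size([2, 100, 30])

In such a sequence-to-sequence setup, conditioning the decoder on the full encoded utterance is what allows co-articulation effects to span many frames rather than being limited to a fixed local context window.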
Original language: English
Pages (from-to): 101-119
Number of pages: 19
Journal: Computer Speech and Language
Volume: 55
Early online date: 16 Nov 2018
DOIs
Publication status: Published - May 2019

Keywords

  • Deep neural network
  • Dynamic visemes
  • Talking head
  • Visual speech synthesis
