Abstract
This paper proposes and compares a range of methods to improve the naturalness of visual speech synthesis. A feedforward deep neural network (DNN) and many-to-one and many-to-many recurrent neural networks (RNNs) using long short-term memory (LSTM) are considered. Rather than using acoustically derived units of speech, such as phonemes, viseme representations are explored, and we propose using dynamic visemes within a deep learning framework. The input feature representation to the models is also investigated, and we determine that including wide phoneme and viseme contexts is crucial for predicting realistic lip motions that are sufficiently smooth but not under-articulated. A detailed objective evaluation across a range of system configurations shows that a joint dynamic viseme-phoneme speech unit, combined with a many-to-many encoder-decoder architecture, models visual co-articulation effectively. Subjective preference tests reveal no significant difference between animations produced using this system and animations using ground truth facial motion taken from the original video. Furthermore, the dynamic viseme system also significantly outperforms conventional phoneme-driven speech animation systems.
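As a rough illustration of the many-to-many encoder-decoder configuration described in the abstract, the sketch below maps a sequence of phoneme/viseme context feature vectors to a sequence of visual (lip-motion) parameters. This is not the authors' implementation; the framework (PyTorch), the class name, and all feature dimensions and layer sizes are illustrative assumptions.

```python
# Hypothetical sketch of a many-to-many LSTM encoder-decoder for visual
# speech synthesis. Dimensions are placeholders, not values from the paper.
import torch
import torch.nn as nn


class Seq2SeqVisualSynth(nn.Module):
    def __init__(self, in_dim=250, hid_dim=256, out_dim=30):
        super().__init__()
        # Encoder consumes the full phoneme/viseme context sequence.
        self.encoder = nn.LSTM(in_dim, hid_dim, batch_first=True)
        # Decoder emits one visual frame per input frame (many-to-many),
        # conditioned on the encoder's final state.
        self.decoder = nn.LSTM(in_dim, hid_dim, batch_first=True)
        self.project = nn.Linear(hid_dim, out_dim)

    def forward(self, feats):
        # feats: (batch, frames, in_dim) linguistic context vectors
        _, state = self.encoder(feats)           # summarise the utterance
        dec_out, _ = self.decoder(feats, state)  # decode frame by frame
        return self.project(dec_out)             # (batch, frames, out_dim)


if __name__ == "__main__":
    model = Seq2SeqVisualSynth()
    x = torch.randn(2, 100, 250)   # 2 utterances, 100 frames each
    print(model(x).shape)          # torch.Size([2, 100, 30])
```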
| Original language | English |
| --- | --- |
| Pages (from-to) | 101-119 |
| Number of pages | 19 |
| Journal | Computer Speech and Language |
| Volume | 55 |
| Early online date | 16 Nov 2018 |
| DOIs | |
| Publication status | Published - May 2019 |
Keywords
- Deep neural network
- Dynamic visemes
- Talking head
- Visual speech synthesis
Profiles
- Ben Milner
  - School of Computing Sciences - Senior Lecturer
  - Data Science and AI - Member
  - Interactive Graphics and Audio - Member
  - Smart Emerging Technologies - Member
  - Person: Research Group Member, Academic, Teaching & Research