Abstract
This paper introduces a method for the automatic redubbing of video that exploits the many-to-many mapping of phoneme sequences to lip movements modelled as dynamic visemes [1]. For a given utterance, the corresponding dynamic viseme sequence is sampled to construct a graph of possible phoneme sequences that synchronize with the video. When composed with a pronunciation dictionary and language model, this produces a vast number of word sequences that are in sync with the original video, literally putting plausible words into the mouth of the speaker. We demonstrate that traditional one-to-many static visemes lack the flexibility required for this application, as they produce significantly fewer word sequences. This work explores the natural ambiguity in visual speech and offers insights for automatic speech recognition, highlighting the importance of language modeling.
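The abstract describes composing a phoneme graph derived from dynamic visemes with a pronunciation dictionary and a language model. As a rough illustration of that pipeline, the sketch below enumerates the phoneme sequences admitted by a toy viseme track, segments them into dictionary words, and ranks the resulting word sequences by a language-model score. Everything here is hypothetical: the `VISEME_TO_PHONES` mapping, `PRON_DICT`, the `LM_LOGPROB` values, and the brute-force enumeration merely stand in for the learned dynamic-viseme models and the graph composition used in the paper.

```python
# Illustrative sketch only: the viseme-to-phoneme mapping, pronunciation
# dictionary, and language-model scores below are invented toy data, not
# the models from the paper.
from itertools import product
from math import log

# Hypothetical many-to-many mapping: each dynamic viseme admits several
# phoneme substrings (the source of the ambiguity exploited for redubbing).
VISEME_TO_PHONES = {
    "DV1": [("p",), ("b",), ("m",)],
    "DV2": [("ae", "t"), ("ae", "d"), ("ae", "n")],
}

# Hypothetical pronunciation dictionary: phoneme string -> candidate words.
PRON_DICT = {
    ("p", "ae", "t"): ["pat"],
    ("b", "ae", "t"): ["bat"],
    ("m", "ae", "t"): ["mat", "matt"],
    ("b", "ae", "d"): ["bad"],
    ("m", "ae", "n"): ["man"],
    ("p", "ae", "d"): ["pad"],
    ("b", "ae", "n"): ["ban"],
}

# Hypothetical unigram log-probabilities standing in for a language model.
LM_LOGPROB = {"pat": log(0.2), "bat": log(0.2), "man": log(0.2),
              "mat": log(0.15), "bad": log(0.1), "matt": log(0.05),
              "pad": log(0.05), "ban": log(0.05)}


def phoneme_lattice(viseme_seq):
    """Expand a dynamic-viseme sequence into every phoneme sequence it admits."""
    for choice in product(*(VISEME_TO_PHONES[v] for v in viseme_seq)):
        yield tuple(p for segment in choice for p in segment)


def segmentations(phones):
    """Recursively segment a phoneme string into words from the dictionary."""
    if not phones:
        yield []
        return
    for end in range(1, len(phones) + 1):
        for word in PRON_DICT.get(phones[:end], []):
            for rest in segmentations(phones[end:]):
                yield [word] + rest


def redub_candidates(viseme_seq):
    """Rank word sequences that remain in sync with the given viseme track."""
    hyps = set()
    for phones in phoneme_lattice(viseme_seq):
        for words in segmentations(phones):
            hyps.add(tuple(words))
    return sorted(hyps,
                  key=lambda ws: -sum(LM_LOGPROB.get(w, log(1e-6)) for w in ws))


if __name__ == "__main__":
    for words in redub_candidates(["DV1", "DV2"]):
        print(" ".join(words))
```

Run on the two-unit toy viseme track, the sketch returns in-sync candidates such as "pat", "bat" and "man" ahead of lower-scoring alternatives, mirroring how the paper's language model ranks the many word sequences licensed by a single lip-movement sequence.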
Original language | English |
---|---|
Title of host publication | 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
Publisher | The Institute of Electrical and Electronics Engineers (IEEE) |
Pages | 4904-4908 |
ISBN (Electronic) | 978-1-4673-6997-8 |
DOIs | |
Publication status | Published - 6 Aug 2015 |
Event | International Conference on Acoustics, Speech and Signal Processing - Brisbane, Australia. Duration: 19 Apr 2015 → 24 Apr 2015 |
Conference
Conference | International Conference on Acoustics, Speech and Signal Processing |
---|---|
Country/Territory | Australia |
City | Brisbane |
Period | 19/04/15 → 24/04/15 |
Keywords
- Audio-visual speech
- dynamic visemes
- acoustic redubbing