A Mouth Full of Words: Visually Consistent Acoustic Redubbing

Sarah Taylor, Barry-John Theobald, Iain Matthews

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



This paper introduces a method for automatic redubbing of video that exploits the many-to-many mapping of phoneme sequences to lip movements modelled as dynamic visemes [1]. For a given utterance, the corresponding dynamic viseme sequence is sampled to construct a graph of possible phoneme sequences that synchronize with the video. When composed with a pronunciation dictionary and language model, this produces a vast number of word sequences that are in sync with the original video, literally putting plausible words into the mouth of the speaker. We demonstrate that traditional, one-to-many, static visemes lack flexibility for this application, as they produce significantly fewer word sequences. This work explores the natural ambiguity in visual speech and offers insight into automatic speech recognition and the importance of language modelling.
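The abstract describes expanding a dynamic viseme sequence into a graph of candidate phoneme sequences, then composing that graph with a pronunciation dictionary to obtain word sequences that stay in sync with the video. A minimal sketch of that expand-and-compose idea is shown below; the viseme inventory, phoneme mappings, and dictionary entries are hypothetical toy data, not the paper's actual models, and the full method additionally weights candidates with a language model.

```python
from itertools import product

# Hypothetical many-to-many mapping: each dynamic viseme admits
# several candidate phoneme subsequences (toy data for illustration).
viseme_to_phones = {
    "v1": [("p",), ("b",), ("m",)],
    "v2": [("ae", "t"), ("a", "d")],
}

# Hypothetical pronunciation dictionary: phoneme sequence -> word.
pron_dict = {
    ("p", "ae", "t"): "pat",
    ("b", "ae", "t"): "bat",
    ("m", "ae", "t"): "mat",
    ("b", "a", "d"): "bad",
    ("m", "a", "d"): "mad",
}

def redub_candidates(viseme_seq):
    """Enumerate word sequences consistent with a dynamic viseme sequence.

    Every path through the per-viseme candidates is a phoneme sequence
    that synchronizes with the video; composing with the dictionary
    keeps only paths that spell real words.
    """
    words = []
    for combo in product(*(viseme_to_phones[v] for v in viseme_seq)):
        phones = tuple(p for segment in combo for p in segment)
        if phones in pron_dict:
            words.append(pron_dict[phones])
    return words

print(redub_candidates(["v1", "v2"]))
```

Because each viseme maps to several phoneme alternatives, even this two-viseme toy example yields multiple plausible words; a one-to-many static viseme mapping would collapse each viseme to a single alternative and prune most of these candidates, which is the flexibility gap the paper measures.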
Original language: English
Title of host publication: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: The Institute of Electrical and Electronics Engineers (IEEE)
ISBN (Electronic): 978-1-4673-6997-8
Publication status: Published - 6 Aug 2015
Event: International Conference on Acoustics, Speech and Signal Processing - Brisbane, Australia
Duration: 19 Apr 2015 - 24 Apr 2015


Conference: International Conference on Acoustics, Speech and Signal Processing


Keywords
  • Audio-visual speech
  • Dynamic visemes
  • Acoustic redubbing
