Improving computer lipreading via DNN sequence discriminative training techniques

Kwanchiva Thangthai, Richard Harvey

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Citations (Scopus)
23 Downloads (Pure)


Although there have been some promising results in computer lipreading, there has been a paucity of data on which to train automatic systems. However, the recent emergence of the TCD-TIMIT corpus, with around 6000 words, 59 speakers, and seven hours of recorded audio-visual speech, allows the deployment of more recent techniques from audio speech recognition, such as Deep Neural Networks (DNNs) and sequence discriminative training.

In this paper we combine the DNN with a Hidden Markov Model (HMM) in the so-called hybrid DNN-HMM configuration, which we train using a variety of sequence discriminative training methods. Decoding is then performed with a weighted finite-state transducer. The conclusion is that the DNN offers a very substantial improvement over a conventional classifier that uses a Gaussian Mixture Model (GMM) to model the densities, even when the GMM system is optimised with Speaker Adaptive Training. Sequence discriminative training offers further gains, depending on the precise variety employed, of the order of ~10% in word accuracy. Taken together, these two results imply that lipreading is moving from something of rather esoteric interest to becoming a practical reality in the foreseeable future.
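As background to the hybrid configuration described above: in a standard hybrid DNN-HMM, the network estimates posteriors over HMM states, and these are converted to scaled likelihoods by dividing out the state priors before HMM/WFST decoding, replacing the GMM densities. The sketch below illustrates only this generic posterior-to-scaled-likelihood step, not the authors' specific recipe; the function names and example values are illustrative assumptions.

```python
import math

def log_softmax(logits):
    # Numerically stable log-softmax over the DNN's output logits,
    # giving log posteriors log p(state | observation).
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def hybrid_scaled_loglikes(logits, log_priors):
    # Hybrid DNN-HMM trick: the DNN estimates p(s|o); dividing by the
    # state prior p(s) yields the scaled likelihood p(o|s)/p(o), used in
    # place of a GMM density during decoding. In the log domain this is
    # a simple subtraction.
    return [lp - pr for lp, pr in zip(log_softmax(logits), log_priors)]

# Toy example: three HMM states with uniform priors (values are made up).
scores = hybrid_scaled_loglikes([2.0, 1.0, 0.1], [math.log(1 / 3)] * 3)
```

With uniform priors the ranking of states is unchanged; with non-uniform priors, frequent states are penalised relative to their raw posteriors, which is what makes the scores usable as emission likelihoods.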
Original language: English
Title of host publication: Proceedings of Interspeech 2017
Number of pages: 5
Publication status: Published - 1 Aug 2017


  • visual-only speech recognition
  • computer lipreading
