Voicing classification of visual speech using convolutional neural networks

Thomas Le Cornu, Ben Milner

Research output: Contribution to conference (Paper)


Abstract

The application of neural network and convolutional neural network (CNN) architectures is explored for the tasks of voicing classification (classifying frames as non-speech, unvoiced, or voiced) and voice activity detection (VAD) of visual speech. Experiments are conducted for both speaker-dependent and speaker-independent scenarios.
A Gaussian mixture model (GMM) baseline system is developed using standard image-based two-dimensional discrete cosine transform (2D-DCT) visual speech features, achieving speaker-dependent accuracies of 79% and 94% for voicing classification and VAD respectively. Additionally, a single-layer neural network system trained on the same visual features achieves accuracies of 86% and 97%. A novel technique using convolutional neural networks for visual speech feature extraction and classification is presented, which further improves the voicing classification and VAD results to 88% and 98% respectively.
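As a rough illustration of the image-based 2D-DCT feature extraction used by the baseline: a grayscale mouth region of interest (ROI) is transformed with a separable 2D DCT and the lowest-frequency coefficients are kept as the feature vector. The ROI size, coefficient count, and zig-zag ordering below are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: row k holds cos(pi*(2m+1)k / 2n)
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1.0 / np.sqrt(n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def dct2_features(roi, n_coeffs=50):
    """Separable 2D DCT of a grayscale mouth ROI, keeping the n_coeffs
    lowest-frequency coefficients in zig-zag order (assumed ordering)."""
    h, w = roi.shape
    coeffs = dct_matrix(h) @ roi @ dct_matrix(w).T
    # Zig-zag: sort index pairs by ascending diagonal (i + j)
    idx = sorted(((i, j) for i in range(h) for j in range(w)),
                 key=lambda p: (p[0] + p[1], p[0]))
    return np.array([coeffs[i, j] for i, j in idx[:n_coeffs]])
```

In a frame-based system of this kind, each video frame's ROI would yield one such feature vector, which is then classified (here by the GMM or neural network) into non-speech, unvoiced, or voiced.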
The speaker-independent results show the neural network system to outperform both the GMM and CNN systems, achieving accuracies of 63% for voicing classification and 79% for voice activity detection.
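For intuition, a CNN-based classifier of the general kind described operates directly on the ROI pixels rather than on hand-crafted DCT features. The toy forward pass below (single convolutional layer, ReLU, global average pooling, softmax over the three classes) is a minimal sketch of that idea; the layer sizes and structure are assumptions, not the paper's architecture.

```python
import numpy as np

def conv2d_valid(img, kernels):
    # Naive valid-mode cross-correlation (the usual CNN "convolution"),
    # one 2D kernel per output channel.
    kh, kw = kernels.shape[1:]
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], oh, ow))
    for c, k in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                out[c, i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def cnn_forward(roi, kernels, W, b):
    """Toy forward pass: conv -> ReLU -> global average pool -> softmax
    over the three classes (non-speech, unvoiced, voiced)."""
    feat = np.maximum(conv2d_valid(roi, kernels), 0.0).mean(axis=(1, 2))
    logits = W @ feat + b
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```

Training such a network end-to-end lets the convolutional filters learn the visual features, which is what distinguishes the CNN system from the 2D-DCT pipelines.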
Original language: English
Publication status: Published - 2015
Event: FAAVSP - The 1st Joint Conference on Facial Analysis, Animation and Auditory-Visual Speech Processing - Vienna, Austria
Duration: 11 Sep 2015 - 13 Sep 2015

Conference

Conference: FAAVSP - The 1st Joint Conference on Facial Analysis, Animation and Auditory-Visual Speech Processing
Abbreviated title: FAAVSP 2015
Country: Austria
City: Vienna
Period: 11/09/15 - 13/09/15
