Reconstructing intelligible audio speech from visual speech features

Ben Milner, Thomas Le Cornu

Research output: Contribution to conference › Paper › peer-review


Abstract

This work describes an investigation into the feasibility of producing intelligible audio speech from only visual speech features. The proposed method aims to estimate a spectral envelope from visual features, which is then combined with an artificial excitation signal and used within a model of speech production to reconstruct an audio signal. Different combinations of audio and visual features are considered, along with both a statistical method of estimation and a deep neural network. The intelligibility of the reconstructed audio speech is measured by human listeners, and then compared to the intelligibility of the video signal only and when combined with the reconstructed audio.
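The reconstruction pipeline described in the abstract follows the classic source-filter model: an artificial excitation signal is passed through a filter shaped by the estimated spectral envelope. The sketch below is a minimal, hypothetical illustration of that synthesis step only; the function names are illustrative, the excitation is a fixed-pitch impulse train, and the all-pole coefficients stand in for values that the paper's method would estimate from visual features (the estimation itself is not shown).

```python
import numpy as np

def excitation(n_samples, fs=8000, f0=160.0):
    """Artificial voiced excitation: an impulse train at pitch f0."""
    e = np.zeros(n_samples)
    period = int(fs / f0)  # samples between glottal pulses
    e[::period] = 1.0
    return e

def allpole_filter(a, e):
    """Filter excitation e through an all-pole (spectral-envelope) filter.

    a holds the denominator coefficients a[1..p], so
    y[n] = e[n] - sum_k a[k] * y[n-k].
    In the paper's setting these coefficients would come from the
    envelope estimated from visual features; here they are given.
    """
    y = np.zeros_like(e)
    p = len(a)
    for n in range(len(e)):
        acc = e[n]
        for k in range(1, min(p, n) + 1):
            acc -= a[k - 1] * y[n - k]
        y[n] = acc
    return y

# Toy reconstruction of one frame with a stable single-pole envelope.
e = excitation(100, fs=8000, f0=160.0)       # impulses every 50 samples
y = allpole_filter(np.array([-0.5]), e)      # pole at z = 0.5
```

This only reproduces the synthesis half of the system; the paper's contribution is estimating the envelope from visual features (statistically or with a deep neural network) before this step.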
Original language: English
Publication status: Published - 2015
Event: Interspeech 2015 - Dresden, Germany
Duration: 6 Sept 2015 - 10 Sept 2015

