Analysis of Correlation between Audio and Visual Speech Features for Clean Audio Feature Prediction in Noise

Ibrahim Almajai, Ben P. Milner, Jonathan Darch

Research output: Contribution to conference › Paper

14 Citations (Scopus)

Abstract

The aim of this work is to examine the correlation between audio and visual speech features. The motivation is to find visual features that can provide clean audio feature estimates, which can be used for speech enhancement when the original audio signal is corrupted by noise. Two audio features (MFCCs and formants) and three visual features (active appearance model, 2-D DCT and cross-DCT) are considered, with correlation measured using multiple linear regression. The correlation is then exploited through the development of a maximum a posteriori (MAP) prediction of audio features solely from the visual features. Experiments reveal that features representing broad spectral information have higher correlation with visual features than those representing finer spectral detail. The accuracy of prediction follows the trends found in the correlation measurements.
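To make the two steps described in the abstract concrete, the sketch below illustrates (i) measuring audio-visual correlation with multiple linear regression and (ii) MAP prediction of an audio feature vector from a visual feature vector. It is a minimal illustration, not the paper's implementation: the feature dimensions, the synthetic data and the single joint Gaussian model (under which the MAP estimate reduces to the conditional mean) are all assumptions made here for clarity.

```python
# Hedged sketch: multiple-linear-regression correlation and Gaussian MAP
# prediction of audio features from visual features. Dimensions, data and
# the single-Gaussian assumption are illustrative, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
T, dim_v, dim_a = 1000, 20, 13           # frames, visual dim, audio dim (assumed)
V = rng.standard_normal((T, dim_v))      # visual features (e.g. 2-D DCT) per frame
A = V @ rng.standard_normal((dim_v, dim_a)) + 0.5 * rng.standard_normal((T, dim_a))

# --- Multiple correlation: regress each audio dimension on all visual dims ---
V1 = np.hstack([V, np.ones((T, 1))])           # append bias term
W, *_ = np.linalg.lstsq(V1, A, rcond=None)     # least-squares regression weights
A_hat = V1 @ W
ss_res = ((A - A_hat) ** 2).sum(axis=0)
ss_tot = ((A - A.mean(axis=0)) ** 2).sum(axis=0)
R = np.sqrt(1.0 - ss_res / ss_tot)             # multiple correlation per audio dim
print("multiple correlation R per audio dimension:", np.round(R, 2))

# --- MAP prediction under a joint Gaussian model (assumption) ---
# With a single joint Gaussian over [audio; visual], the MAP estimate of the
# audio vector given a visual vector equals the conditional mean.
mu_a, mu_v = A.mean(axis=0), V.mean(axis=0)
cov = np.cov(np.hstack([A, V]).T)
S_av = cov[:dim_a, dim_a:]
S_vv = cov[dim_a:, dim_a:]

def map_audio_from_visual(v):
    """Conditional-mean (MAP) estimate of the audio features given visual features v."""
    return mu_a + S_av @ np.linalg.solve(S_vv, v - mu_v)

print("MAP audio estimate, frame 0:", np.round(map_audio_from_visual(V[0]), 2))
```

In this jointly Gaussian setting, audio dimensions with higher multiple correlation R are predicted with proportionally lower conditional variance, which mirrors the abstract's observation that prediction accuracy follows the correlation measurements.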
Original language: English
Publication status: Published - Sep 2006
Event: ICSLP 9th International Conference on Spoken Language Processing - Pittsburgh, United States
Duration: 17 Sep 2006 – 21 Sep 2006

Conference

Conference: ICSLP 9th International Conference on Spoken Language Processing
Abbreviated title: Interspeech 2006
Country/Territory: United States
City: Pittsburgh
Period: 17/09/06 – 21/09/06
