The aim of this work is to examine the correlation between audio and visual speech features. The motivation is to find visual features that can provide clean audio feature estimates for speech enhancement when the original audio signal is corrupted by noise. Two audio features (MFCCs and formants) and three visual features (active appearance models, 2-D DCT and cross-DCT) are considered, with correlation measured using multiple linear regression. The correlation is then exploited through the development of a maximum a posteriori (MAP) prediction of audio features solely from the visual features. Experiments reveal that features representing broad spectral information have higher correlation with visual features than those representing finer spectral detail. Prediction accuracy follows the trends observed in the correlation measurements.
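The abstract's pipeline can be illustrated with a minimal sketch. The following is not the paper's implementation: it uses synthetic stand-in data, and it assumes the common joint-Gaussian formulation of MAP audio-feature prediction, under which the MAP estimate reduces to the same linear predictor fitted by multiple linear regression — which is why prediction accuracy tracks the measured correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (hypothetical data, not the paper's corpus):
# V: visual features (e.g. AAM or DCT coefficients), A: audio features (e.g. MFCCs).
n, dv, da = 500, 10, 4
V = rng.standard_normal((n, dv))
W_true = rng.standard_normal((dv, da))
A = V @ W_true + 0.3 * rng.standard_normal((n, da))

# Multiple linear regression of each audio feature on all visual features;
# the multiple correlation coefficient R quantifies audio-visual correlation.
Vb = np.hstack([V, np.ones((n, 1))])             # append a bias column
W, *_ = np.linalg.lstsq(Vb, A, rcond=None)
A_hat = Vb @ W
R = np.array([np.corrcoef(A[:, j], A_hat[:, j])[0, 1] for j in range(da)])

# Under a joint-Gaussian assumption, the MAP estimate of A given V is
# mu_A + C_AV C_VV^{-1} (v - mu_V), i.e. the same linear predictor as above.
mu_V, mu_A = V.mean(axis=0), A.mean(axis=0)
C_VV = np.cov(V, rowvar=False)
C_AV = (A - mu_A).T @ (V - mu_V) / (n - 1)
A_map = mu_A + (V - mu_V) @ np.linalg.solve(C_VV, C_AV.T)
```

Because the normal equations of the bias-augmented regression coincide with the Gaussian conditional mean, `A_map` matches `A_hat` up to numerical precision; a high `R` for a given audio feature therefore directly implies accurate MAP prediction of that feature.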
Publication status: Published - September 2006
Event: ICSLP, 9th International Conference on Spoken Language Processing (Interspeech 2006), Pittsburgh, United States, 17–21 September 2006