Using visual speech information and perceptually motivated loss functions for binary mask estimation

Danny Websdale, Ben Milner

Research output: Contribution to conference › Paper › peer-review


Abstract

This work is concerned with using deep neural networks (DNNs) for estimating binary masks within a speech enhancement framework. We first examine the effect of supplementing the audio features used in mask estimation with visual speech information. Visual speech is known to be robust to noise, although it is not necessarily as discriminative as audio features, particularly at higher signal-to-noise ratios (SNRs). Furthermore, most DNN approaches to mask estimation use the cross-entropy (CE) loss function, which aims to maximise classification accuracy. We therefore propose a loss function that aims to maximise the hit minus false-alarm (HIT-FA) rate of the mask, which is known to correlate more closely with speech intelligibility than classification accuracy. We then extend this to a hybrid loss function that combines the CE and HIT-FA loss functions, providing a balance between the classification accuracy and the HIT-FA rate of the resulting masks. Evaluations of the perceptually motivated loss functions are carried out using the GRID and larger RM-3000 datasets and show improvements to the HIT-FA rate and ESTOI across all noises and SNRs tested. Tests also found that combining the audio and visual information into a single bimodal audio-visual system gave the best performance for all measures and conditions tested.
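
As a rough illustration of the hybrid objective described in the abstract, the sketch below combines a standard binary cross-entropy term with a differentiable soft-count proxy for the HIT-FA rate. The soft-count formulation of HIT and FA, the assumption of sigmoid mask estimates in [0, 1], and the weighting parameter `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hit_fa_loss(pred, target, eps=1e-8):
    # Differentiable proxy for the HIT-FA rate:
    #   HIT = fraction of target-dominant units (mask = 1) predicted as 1
    #   FA  = fraction of masker-dominant units (mask = 0) predicted as 1
    # Soft counts replace hard thresholding so gradients can flow;
    # the loss is the negated HIT-FA, so minimising it maximises HIT-FA.
    hit = (pred * target).sum() / (target.sum() + eps)
    fa = (pred * (1.0 - target)).sum() / ((1.0 - target).sum() + eps)
    return -(hit - fa)

def hybrid_loss(pred, target, alpha=0.5):
    # Convex combination of the CE and HIT-FA terms; alpha (assumed
    # here, not from the paper) trades classification accuracy
    # against the HIT-FA rate of the estimated mask.
    ce = F.binary_cross_entropy(pred, target)
    return alpha * ce + (1.0 - alpha) * hit_fa_loss(pred, target)

# Hypothetical usage with a batch of time-frequency mask estimates:
pred = torch.sigmoid(torch.randn(4, 64))   # DNN mask estimates in [0, 1]
mask = (torch.rand(4, 64) > 0.5).float()   # ideal binary mask targets
loss = hybrid_loss(pred, mask, alpha=0.5)
```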
Original language: English
Pages: 41-46
Number of pages: 6
DOIs
Publication status: Published - Aug 2017
