Abstract
This work is concerned with using deep neural networks to estimate binary masks within a speech enhancement framework. We first examine the effect of supplementing the audio features used in mask estimation with visual speech information. Visual speech is known to be robust to noise, although it is not necessarily as discriminative as audio features, particularly at higher signal-to-noise ratios (SNRs). Furthermore, most DNN approaches to mask estimation use the cross-entropy (CE) loss function, which aims to maximise classification accuracy. We therefore propose a loss function that aims to maximise the hit minus false-alarm (HIT-FA) rate of the mask, which is known to correlate more closely with speech intelligibility than classification accuracy. We then extend this to a hybrid loss function that combines the CE and HIT-FA loss functions, balancing classification accuracy against the HIT-FA rate of the resulting masks. Evaluations of the perceptually motivated loss functions are carried out using the GRID and larger RM-3000 datasets, and show improvements in HIT-FA rate and ESTOI across all noises and SNRs tested. Tests also found that supplementing audio with visual information in a single bimodal audio-visual system gave the best performance for all measures and conditions tested.
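The abstract does not give the loss formulations explicitly, but a minimal sketch of how a differentiable HIT-FA surrogate and a hybrid CE/HIT-FA loss might look is shown below, in PyTorch, assuming sigmoid mask outputs and ideal-binary-mask targets; `hit_fa_loss`, `hybrid_loss` and the weighting `alpha` are illustrative names and values, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def hit_fa_loss(pred, ibm, eps=1e-8):
    """Differentiable surrogate for the HIT-FA rate of a binary mask.

    pred: sigmoid mask estimates in [0, 1]; ibm: ideal binary mask targets (0/1),
    both of the same shape (e.g. batch x time x frequency).
    """
    # Soft hit rate: proportion of target-dominant (ibm = 1) units predicted as 1
    hit = (pred * ibm).sum() / (ibm.sum() + eps)
    # Soft false-alarm rate: proportion of noise-dominant (ibm = 0) units predicted as 1
    fa = (pred * (1.0 - ibm)).sum() / ((1.0 - ibm).sum() + eps)
    # Minimising FA - HIT maximises the HIT-FA rate
    return fa - hit

def hybrid_loss(pred, ibm, alpha=0.5):
    """Hybrid of cross-entropy and the HIT-FA surrogate.

    alpha trades classification accuracy (CE term) against HIT-FA rate;
    it is a hypothetical weighting, not a value from the paper.
    """
    ce = F.binary_cross_entropy(pred, ibm)
    return alpha * ce + (1.0 - alpha) * hit_fa_loss(pred, ibm)
```

Using the soft sigmoid outputs in place of thresholded decisions keeps the HIT-FA objective differentiable, so it can be optimised with standard backpropagation alongside the CE term.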
| Original language | English |
| --- | --- |
| Pages | 41-46 |
| Number of pages | 6 |
| DOIs | |
| Publication status | Published - Aug 2017 |
Profiles
- Ben Milner: Senior Lecturer, School of Computing Sciences; Member, Interactive Graphics and Audio; Member, Smart Emerging Technologies