Abstract
This work proposes a method that exploits both audio and visual speech information to extract a target speaker from a mixture of competing speakers. The work begins by taking an effective audio-only method of speaker separation, namely the soft mask method, and modifying its operation to allow visual speech information to improve the separation process. The audio input is taken from a single channel and contains the mixture of speakers, whereas a separate set of visual features is extracted from each speaker. This allows the separation process to be modified to include not only the audio speech but also the visual speech from each speaker in the mixture. Experimental results are presented that compare the proposed audio-visual speaker separation with audio-only and visual-only methods using both speech quality and speech intelligibility metrics.
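The soft mask idea referenced in the abstract can be illustrated with a minimal sketch: a per-time-frequency-bin ratio mask is computed from the relative energies of the target and interfering speakers, then applied to the mixture spectrogram. This is a generic illustration of soft masking, not the paper's audio-visual method; the function names, the toy spectrograms, and the zero-phase mixture are all assumptions for the example.

```python
import numpy as np

def soft_mask(target_mag, interferer_mag, eps=1e-8):
    """Ratio mask: target energy over total energy in each TF bin (values in [0, 1])."""
    return target_mag / (target_mag + interferer_mag + eps)

def separate(mixture_stft, target_mag, interferer_mag):
    """Apply the soft mask elementwise to the (complex) mixture spectrogram."""
    return soft_mask(target_mag, interferer_mag) * mixture_stft

# Toy magnitude spectrograms (frequency bins x time frames), hypothetical values.
t = np.array([[2.0, 0.0], [1.0, 3.0]])   # target speaker magnitudes
i = np.array([[0.0, 2.0], [1.0, 1.0]])   # interfering speaker magnitudes
mix = (t + i).astype(complex)            # zero-phase toy mixture for illustration
est = separate(mix, t, i)                # masked mixture approximates the target
```

In practice the mask would be estimated (in the paper, with the help of visual features from each speaker) rather than computed from the true source magnitudes as in this toy oracle case.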
Original language | English
---|---
Pages | 1517-1521
Number of pages | 5
Publication status | Published - 2015
Event | Interspeech 2015, Dresden, Germany, 6 Sep 2015 → 10 Sep 2015
Conference
Conference | Interspeech 2015
---|---
Country/Territory | Germany
City | Dresden
Period | 6/09/15 → 10/09/15
Profiles
- Ben Milner, School of Computing Sciences, Senior Lecturer