Speaker independent visual-only language identification

Research output: Contribution to conference › Paper

12 Citations (Scopus)

Abstract

We describe experiments in visual-only language identification (VLID), in which only lip shape, appearance and motion are used to determine the language of a spoken utterance. In previous work, we showed that this is possible in speaker-dependent mode, i.e. identifying the language spoken by a multi-lingual speaker. Here, by appropriately modifying techniques that have been successful in audio language identification, we extend the work to discriminating two languages in speaker-independent mode. Our results indicate that reasonable discrimination can be obtained even with viseme recognition accuracy as low as about 34%. A simulation in which viseme recognition accuracy is progressively degraded indicates that high VLID accuracy should be achievable with viseme error rates of the order of 50%.
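The abstract does not name the audio LID technique that was adapted, but a standard one is phone recognition followed by language modelling (PRLM), with visemes standing in for phones. The sketch below is only an illustration of that general idea, not the paper's method: it trains an add-alpha smoothed viseme-bigram model per language and classifies an utterance by which model gives its recognised viseme string the higher log-likelihood. The viseme inventory and training strings are invented placeholders.

```python
# Illustrative PRLM-style language ID over viseme strings (not the
# paper's implementation): one bigram model per language, classify by
# log-likelihood of the recognised viseme sequence.
import math
from collections import defaultdict

def train_bigram(sequences, vocab, alpha=1.0):
    """Estimate add-alpha smoothed bigram log-probabilities P(v2 | v1)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for v1, v2 in zip(seq, seq[1:]):
            counts[v1][v2] += 1
    logp = {}
    for v1 in vocab:
        total = sum(counts[v1].values()) + alpha * len(vocab)
        logp[v1] = {v2: math.log((counts[v1][v2] + alpha) / total)
                    for v2 in vocab}
    return logp

def score(seq, logp):
    """Total bigram log-likelihood of a viseme sequence under one model."""
    return sum(logp[v1][v2] for v1, v2 in zip(seq, seq[1:]))

# Hypothetical viseme inventory and toy training strings for two languages.
VOCAB = list("ABCDEF")
lang1 = [list("ABABCF"), list("ABCFAB"), list("CFABAB")]
lang2 = [list("DEDEFC"), list("DEFCDE"), list("FCDEDE")]

models = {"language_1": train_bigram(lang1, VOCAB),
          "language_2": train_bigram(lang2, VOCAB)}

utterance = list("ABCFAB")  # output of a (noisy) viseme recogniser
best = max(models, key=lambda lang: score(utterance, models[lang]))
print(best)  # -> language_1
```

In a PRLM system the recogniser output is errorful, which is why the bigram statistics, rather than any single viseme label, carry the discriminative information; this is also why the abstract can report useful discrimination at only ~34% viseme accuracy.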
Original language: English
Pages: 5026-5029
Number of pages: 4
DOIs
Publication status: Published - Mar 2010
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing - Dallas, United States
Duration: 14 Mar 2010 - 19 Mar 2010

Conference

Conference: IEEE International Conference on Acoustics, Speech, and Signal Processing
Country/Territory: United States
City: Dallas
Period: 14/03/10 - 19/03/10
