Which Phoneme-to-Viseme Maps Best Improve Visual-Only Computer Lip-Reading?

Helen L. Bear, Richard Harvey, Barry Theobald, Yuxuan Lan

Research output: Contribution to conference › Paper › peer-review

26 Citations (Scopus)

Abstract

A critical assumption of all current visual speech recognition systems is that there are visual speech units, called visemes, which can be mapped to the units of acoustic speech, the phonemes. Although a number of such maps have been published, their effectiveness is rarely tested, particularly for visual-only lip-reading (many studies use audio-visual speech). Here we examine 120 mappings and consider whether any are stable across talkers. We show a method for devising maps based on phoneme confusions from an automated lip-reading system, and we present new mappings that show improvements for individual talkers.
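The derivation procedure itself is detailed in the full paper; as an illustration of the general idea only, the following minimal Python sketch derives a viseme grouping by greedily merging the most mutually confused phoneme clusters in a confusion matrix. The phoneme set, the confusion counts, and the target number of visemes here are all hypothetical, not the paper's data or its exact algorithm.

```python
import numpy as np

# Hypothetical phoneme set and confusion counts from a lip-reading
# recogniser: entry (i, j) = how often phoneme i was recognised as j.
phonemes = ["p", "b", "m", "f", "v", "t", "d"]
confusion = np.array([
    [50, 20, 18,  2,  1,  3,  2],
    [22, 48, 19,  1,  2,  2,  3],
    [17, 21, 47,  2,  1,  1,  2],
    [ 1,  2,  1, 55, 25,  3,  2],
    [ 2,  1,  2, 24, 54,  2,  3],
    [ 3,  2,  2,  2,  3, 52, 22],
    [ 2,  3,  1,  3,  2, 23, 51],
], dtype=float)

def derive_visemes(conf, labels, n_visemes):
    """Greedily merge the pair of clusters with the greatest mutual
    confusion until only n_visemes clusters (visemes) remain."""
    clusters = [[i] for i in range(len(labels))]
    while len(clusters) > n_visemes:
        best, best_score = None, -1.0
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Total confusion mass between the two clusters, both ways.
                score = sum(conf[i, j] + conf[j, i]
                            for i in clusters[a] for j in clusters[b])
                if score > best_score:
                    best, best_score = (a, b), score
        a, b = best
        clusters[a] += clusters.pop(b)  # merge b into a (b > a, so safe)
    return [{labels[i] for i in c} for c in clusters]

# With these toy counts this yields the groups {p, b, m}, {f, v}, {t, d},
# i.e. phonemes that look alike on the lips share a viseme class.
for viseme in derive_visemes(confusion, phonemes, n_visemes=3):
    print(sorted(viseme))
```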
Original language: English
Pages: 230-239
Number of pages: 10
Publication status: Published - 3 Dec 2014
Event: International Symposium on Visual Computing - Las Vegas, Nevada, United States
Duration: 8 Dec 2014 – 10 Dec 2014

Conference

Conference: International Symposium on Visual Computing
Country/Territory: United States
City: Las Vegas
Period: 8/12/14 – 10/12/14
