Abstract
Visual lip gestures observed whilst lipreading have a few working definitions; the two most common are: 'the visual equivalent of a phoneme' and 'phonemes which are indistinguishable on the lips'. To date there is no formal definition, in part because no two-way relationship, or mapping, between visemes and phonemes has been established. Some evidence suggests that visual speech is highly speaker-dependent. Here, we use a phoneme-clustering method to form new phoneme-to-viseme maps for both individual and multiple speakers. We test these phoneme-to-viseme maps to examine how similarly speakers talk visually, and we use signed-rank tests to measure the distance between individuals. We conclude that, broadly speaking, speakers share the same repertoire of mouth gestures; where they differ is in their use of those gestures.
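The phoneme-clustering idea in the abstract can be illustrated with a minimal sketch: starting from a phoneme confusion matrix, greedily merge the most-confused pair of clusters until a target number of viseme classes remains, yielding a speaker-dependent phoneme-to-viseme map. This is an assumption-laden toy (the `cluster_phonemes` function, the confusion counts, and the cluster count are all illustrative, not the paper's actual algorithm or data):

```python
# Hypothetical sketch of confusion-driven phoneme clustering; the function
# name, confusion counts, and target class count are illustrative only.

def cluster_phonemes(confusions, phonemes, k):
    """Greedy agglomerative clustering: repeatedly merge the two clusters
    whose member phonemes are most often confused, until k clusters remain."""
    clusters = [{p} for p in phonemes]
    while len(clusters) > k:
        best, pair = -1, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Total confusion mass between the two candidate clusters.
                score = sum(confusions.get((a, b), 0) + confusions.get((b, a), 0)
                            for a in clusters[i] for b in clusters[j])
                if score > best:
                    best, pair = score, (i, j)
        i, j = pair
        clusters[i] |= clusters[j]
        del clusters[j]
    # Label each resulting cluster as a viseme V0, V1, ...
    return {p: f"V{ci}" for ci, c in enumerate(clusters) for p in sorted(c)}

# Toy confusion counts for one speaker: how often phoneme a was mistaken
# for phoneme b when lipreading. Purely illustrative values.
conf = {("p", "b"): 9, ("p", "m"): 7, ("b", "m"): 8,
        ("f", "v"): 9, ("t", "d"): 6}
viseme_map = cluster_phonemes(conf, ["p", "b", "m", "f", "v", "t", "d"], 3)
print(viseme_map)  # e.g. /p b m/, /f v/ and /t d/ each collapse to one viseme
```

Running the same procedure per speaker would give the speaker-dependent maps the abstract describes; comparing such maps across speakers is where a signed-rank test would come in.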
| Original language | English |
|---|---|
| Pages (from-to) | 165-190 |
| Journal | Computer Speech and Language |
| Volume | 52 |
| Early online date | 21 May 2018 |
| DOIs | |
| Publication status | Published - Nov 2018 |
Keywords
- Visual speech
- Lipreading
- Recognition
- Audio-visual
- Speech
- Classification
- Viseme
- Phoneme
- Speaker identity