Near-videorealistic synthetic talking faces: Implementation and evaluation

B Theobald, JA Bangham, I Matthews, GC Cawley

Research output: Contribution to journal › Article › peer-review

29 Citations (Scopus)

Abstract

The application of two-dimensional (2D) shape and appearance models to the problem of creating realistic synthetic talking faces is presented. A sample-based approach is adopted, where the face of a talker articulating a series of phonetically balanced training sentences is mapped to a trajectory in a low-dimensional model-space that has been learnt from the training data. Segments extracted from this trajectory corresponding to the synthesis units (e.g. triphones) are temporally normalised, blended, concatenated and smoothed to form a new trajectory, which is mapped back to the image domain to provide a natural, realistic sequence corresponding to the desired (arbitrary) utterance. The system has undergone early subjective evaluation to determine the naturalness of this synthesis approach. Tests are described that assess the suitability of the parameter smoothing method used to remove discontinuities introduced at the concatenation boundaries during synthesis, and that determine how well long-term coarticulation effects are reproduced during synthesis under the adopted unit selection scheme. The system has also been extended to animate the face of a 3D virtual character (avatar), and this extension is described.
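The pipeline summarised in the abstract (temporally normalise the selected trajectory segments, blend and concatenate them at the joins, then smooth the result) can be sketched as follows. This is a minimal illustrative sketch only, assuming segments are NumPy arrays of shape (frames, model dimensions); all function names, the cross-fade blending, the overlap length and the moving-average smoother are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of concatenative trajectory synthesis in a
# low-dimensional model space. All names and parameters here are
# illustrative assumptions, not the published system's code.
import numpy as np

def time_normalise(segment: np.ndarray, target_len: int) -> np.ndarray:
    """Resample a (frames, dims) trajectory segment to target_len frames."""
    src = np.linspace(0.0, 1.0, len(segment))
    dst = np.linspace(0.0, 1.0, target_len)
    return np.stack([np.interp(dst, src, segment[:, d])
                     for d in range(segment.shape[1])], axis=1)

def concatenate_with_blend(segments, overlap: int = 4) -> np.ndarray:
    """Join segments, cross-fading `overlap` frames at each boundary."""
    out = segments[0]
    for seg in segments[1:]:
        fade = np.linspace(0.0, 1.0, overlap)[:, None]
        blended = (1.0 - fade) * out[-overlap:] + fade * seg[:overlap]
        out = np.vstack([out[:-overlap], blended, seg[overlap:]])
    return out

def smooth(trajectory: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average smoothing along time to suppress join discontinuities."""
    kernel = np.ones(window) / window
    return np.stack([np.convolve(trajectory[:, d], kernel, mode="same")
                     for d in range(trajectory.shape[1])], axis=1)
```

The smoothed trajectory would then be mapped back through the shape and appearance model to the image domain to render the final frame sequence.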
Original language: English
Pages (from-to): 127-140
Number of pages: 14
Journal: Speech Communication
Volume: 44
Issue number: 1-4
DOIs
Publication status: Published - Nov 2004