Abstract
In this paper we describe how 2D appearance models can be applied to the problem of creating a near-videorealistic talking head. A speech corpus of a talker uttering a set of phonetically balanced training sentences is analysed using a generative model of the human face. Segments of original parameter trajectories corresponding to the synthesis unit are extracted from a codebook, normalised, blended, concatenated and smoothed before being applied to the model to give natural, realistic animations of novel utterances. We also present some early results of subjective tests conducted to determine the realism of the synthesiser.
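The pipeline outlined in the abstract (codebook lookup, normalisation, blending, concatenation and smoothing of appearance-model parameter trajectories) can be illustrated with a short sketch. The Python below is a minimal illustration under assumed details (a dictionary codebook keyed by unit label, linear time normalisation, a short cross-fade at segment joins, and a moving-average smoother), not the authors' implementation; every function name, setting and the toy codebook are hypothetical.

```python
"""Minimal sketch of concatenative trajectory synthesis for an appearance model.
All details (codebook layout, unit labels, blend and smoothing settings) are
illustrative assumptions, not the method described in the paper."""

import numpy as np


def time_normalise(segment: np.ndarray, n_frames: int) -> np.ndarray:
    """Linearly resample a (frames x params) segment to a target duration."""
    src = np.linspace(0.0, 1.0, len(segment))
    dst = np.linspace(0.0, 1.0, n_frames)
    return np.stack([np.interp(dst, src, segment[:, k])
                     for k in range(segment.shape[1])], axis=1)


def blend_concatenate(segments, overlap: int = 3) -> np.ndarray:
    """Cross-fade neighbouring segments over `overlap` frames, then join them."""
    out = segments[0]
    for seg in segments[1:]:
        w = np.linspace(0.0, 1.0, overlap)[:, None]        # blend weights 0 -> 1
        blended = (1.0 - w) * out[-overlap:] + w * seg[:overlap]
        out = np.concatenate([out[:-overlap], blended, seg[overlap:]])
    return out


def smooth(traj: np.ndarray, width: int = 5) -> np.ndarray:
    """Moving-average filter applied independently to each model parameter."""
    kernel = np.ones(width) / width
    return np.stack([np.convolve(traj[:, k], kernel, mode="same")
                     for k in range(traj.shape[1])], axis=1)


def synthesise(units, durations, codebook) -> np.ndarray:
    """Look up one trajectory segment per synthesis unit, normalise each to its
    target duration, then blend, concatenate and smooth. `codebook` maps a unit
    label to a (frames x params) array of appearance-model parameters taken
    from the training corpus."""
    segments = [time_normalise(codebook[u], d) for u, d in zip(units, durations)]
    return smooth(blend_concatenate(segments))


# Hypothetical usage: a three-unit utterance rendered frame-by-frame by the model.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codebook = {u: rng.normal(size=(12, 20)) for u in ("sil", "b", "a")}
    trajectory = synthesise(["sil", "b", "a"], [10, 8, 15], codebook)
    print(trajectory.shape)  # (frames, model parameters)
```

Each row of the resulting trajectory would then drive the generative face model to produce one frame of the animation.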
Original language | English |
---|---|
Pages | 187-192 |
Number of pages | 6 |
Publication status | Published - Sep 2003 |
Event | International Conference on Auditory-Visual Speech Processing, St. Jorioz, France |
Duration | 4 Sep 2003 → 7 Sep 2003 |
Conference
Conference | International Conference on Auditory-Visual Speech Processing |
---|---|
Abbreviated title | AVSP-2003 |
Country/Territory | France |
City | St. Jorioz |
Period | 4/09/03 → 7/09/03 |