Evaluating talking heads based on appearance models

B-J Theobald, JA Bangham, I Matthews, GC Cawley

Research output: Contribution to conference › Paper

Abstract

In this paper we describe how 2D appearance models can be applied to the problem of creating a near-videorealistic talking head. A speech corpus of a talker uttering a set of phonetically balanced training sentences is analysed using a generative model of the human face. Segments of original parameter trajectories corresponding to the synthesis units are extracted from a codebook, normalised, blended, concatenated and smoothed before being applied to the model to give natural, realistic animations of novel utterances. We also present some early results of subjective tests conducted to determine the realism of the synthesiser.
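The abstract's pipeline (normalise, blend, concatenate, smooth segments of parameter trajectories) can be sketched in outline. The paper does not specify the exact operations, so the offset normalisation, linear cross-fade, overlap length and moving-average smoother below are all illustrative assumptions, not the authors' method:

```python
import numpy as np

def blend_concatenate(segments, overlap=4):
    """Concatenate trajectory segments (frames x model parameters),
    cross-fading linearly over `overlap` frames at each join.
    The overlap length is an assumed, illustrative choice."""
    out = segments[0].astype(float)
    for seg in segments[1:]:
        seg = seg.astype(float)
        # Assumed normalisation: shift the incoming segment so its first
        # frame matches the last frame of the trajectory built so far.
        seg = seg + (out[-1] - seg[0])
        n = min(overlap, len(out), len(seg))
        w = np.linspace(0.0, 1.0, n)[:, None]  # blending weights
        out[-n:] = (1 - w) * out[-n:] + w * seg[:n]
        out = np.vstack([out, seg[n:]])
    return out

def smooth(traj, k=3):
    """Moving-average smoothing along the time axis, per parameter."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, traj)
```

The resulting smoothed trajectory would then drive the appearance model frame by frame to render the animation.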
Original language: English
Pages: 187-192
Number of pages: 6
Publication status: Published - Sep 2003
Event: Proceedings of the International Conference on Auditory-Visual Speech Processing (AVSP-2003) - St. Jorioz, France
Duration: 4 Sep 2003 - 7 Sep 2003

Conference

Conference: Proceedings of the International Conference on Auditory-Visual Speech Processing (AVSP-2003)
Country: France
City: St. Jorioz
Period: 4/09/03 - 7/09/03