Evaluating talking heads based on appearance models

Barry-John Theobald, J. Andrew Bangham, Iain Matthews, Gavin Cawley

Research output: Contribution to conference › Paper


Abstract

In this paper we describe how 2D appearance models can be applied to the problem of creating a near-videorealistic talking head. A speech corpus of a talker uttering a set of phonetically balanced training sentences is analysed using a generative model of the human face. Segments of original parameter trajectories corresponding to the required synthesis units are extracted from a codebook, then normalised, blended, concatenated and smoothed before being applied to the model to produce natural, realistic animations of novel utterances. We also present preliminary results of subjective tests conducted to assess the realism of the synthesiser.
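The abstract describes a concatenative pipeline over appearance-model parameter trajectories: select stored segments for the target unit sequence, time-normalise them, blend and concatenate at the joins, and smooth the result before rendering. The sketch below illustrates that idea under stated assumptions; the codebook layout, blend length and smoothing filter are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a concatenate-and-smooth synthesis pipeline, assuming
# each codebook entry holds an appearance-model parameter trajectory
# (frames x parameters) for one synthesis unit.
import numpy as np


def time_normalise(segment: np.ndarray, target_len: int) -> np.ndarray:
    """Linearly resample a (frames x params) segment to target_len frames."""
    src = np.linspace(0.0, 1.0, len(segment))
    dst = np.linspace(0.0, 1.0, target_len)
    return np.stack([np.interp(dst, src, segment[:, p])
                     for p in range(segment.shape[1])], axis=1)


def blend_concatenate(segments, overlap: int = 3) -> np.ndarray:
    """Cross-fade consecutive segments over `overlap` frames, then join them."""
    out = segments[0]
    for seg in segments[1:]:
        w = np.linspace(0.0, 1.0, overlap)[:, None]
        join = (1.0 - w) * out[-overlap:] + w * seg[:overlap]
        out = np.vstack([out[:-overlap], join, seg[overlap:]])
    return out


def smooth(trajectory: np.ndarray, width: int = 5) -> np.ndarray:
    """Moving-average smoothing applied independently to each parameter."""
    kernel = np.ones(width) / width
    return np.stack([np.convolve(trajectory[:, p], kernel, mode="same")
                     for p in range(trajectory.shape[1])], axis=1)


# Toy usage: a codebook of per-unit trajectories in a 4-parameter model space.
rng = np.random.default_rng(0)
codebook = {unit: rng.normal(size=(rng.integers(8, 15), 4))
            for unit in ["sil", "b", "a", "t"]}

units = ["sil", "b", "a", "t", "sil"]           # target unit sequence
durations = [10, 6, 12, 8, 10]                  # desired frames per unit
segments = [time_normalise(codebook[u], d) for u, d in zip(units, durations)]
params = smooth(blend_concatenate(segments))    # frames x params trajectory
# Each row of `params` would then drive the appearance model to render a frame.
print(params.shape)
```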
Original language: English
Pages: 187-192
Number of pages: 6
Publication status: Published - Sep 2003
Event: International Conference on Auditory-Visual Speech Processing - St. Jorioz, France
Duration: 4 Sep 2003 – 7 Sep 2003

Conference

Conference: International Conference on Auditory-Visual Speech Processing
Abbreviated title: AVSP-2003
Country/Territory: France
City: St. Jorioz
Period: 4/09/03 – 7/09/03
