Near-videorealistic synthetic visual speech using non-rigid appearance models

BJ Theobald, GC Cawley, I Matthews, JA Bangham

Research output: Contribution to conference › Paper

3 Citations (Scopus)

Abstract

We present work towards videorealistic synthetic visual speech using non-rigid appearance models. These models are used to track a talking face enunciating a set of training sentences. The resultant parameter trajectories are used in a concatenative synthesis scheme, in which samples of original data are extracted from a corpus and concatenated to form new, unseen sequences. Here we explore how blending several synthesis units deemed similar to the desired unit affects the synthesiser output. We present preliminary subjective and objective results assessing the realism of the system.
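The blending step mentioned in the abstract can be pictured as a weighted average of the parameter trajectories of the corpus units most similar to the desired unit. The sketch below is a minimal illustration of that idea, not the paper's implementation: the fixed-length units, the Euclidean distance measure, and the inverse-distance weighting are all assumptions made for the example (in practice candidate units would be selected by phonetic context rather than by direct comparison with a known target trajectory).

```python
import numpy as np

def blend_similar_units(target, corpus, k=3):
    """Return an inverse-distance weighted blend of the k corpus units
    closest to the desired unit.

    target : (T, D) array -- desired trajectory of T frames, each with
             D appearance-model parameters.
    corpus : list of (T, D) arrays sampled from the training corpus.
    """
    # Distance of every candidate unit from the desired unit.
    dists = np.array([np.linalg.norm(unit - target) for unit in corpus])
    nearest = np.argsort(dists)[:k]

    # Inverse-distance weights, normalised to sum to one (an assumed
    # weighting scheme, chosen purely for illustration).
    weights = 1.0 / (dists[nearest] + 1e-8)
    weights /= weights.sum()

    # Weighted average of the selected parameter trajectories.
    return sum(w * corpus[i] for w, i in zip(weights, nearest))

# Toy usage: 10-frame units, 20 appearance parameters per frame.
rng = np.random.default_rng(0)
corpus = [rng.normal(size=(10, 20)) for _ in range(50)]
target = rng.normal(size=(10, 20))
blended = blend_similar_units(target, corpus)
print(blended.shape)  # (10, 20)
```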
Original language: English
Pages: 800-803
Number of pages: 4
DOIs
Publication status: Published - 2003
Event: IEEE International Conference on Acoustics, Speech and Signal Processing - Hong Kong, China
Duration: 6 Apr 2003 – 10 Apr 2003

Conference

Conference: IEEE International Conference on Acoustics, Speech and Signal Processing
Abbreviated title: ICASSP '03
Country/Territory: China
City: Hong Kong
Period: 6/04/03 – 10/04/03