Creating expressive three-dimensional talking faces

Project Details

Description

The broad aim of this work is to develop a life-like expressive talking head. This is difficult to achieve because humans are extremely sensitive to subtle changes in facial features. Flaws in animated sequences are easy to detect and severely degrade the perceived quality of the output. This is especially true for systems that strive for videorealism (output indistinguishable from real video).

All previous approaches that achieve close to videorealism are image-based and two-dimensional: the pose of the character is always face-on, emotion is usually ignored, and the vocabulary is often limited. This work will overcome, for the first time, all of these limitations.

To generate realistic animated sequences, a user need only supply the text (or voice) of the sentence they wish to animate. Contrast this with animation studios, such as Pixar, that require months (or years) of manual tuning of animation parameters to create realistic animated sequences. Of course, those sequences are limited to the script of the movie; generating further sequences would require further manual specification of the parameters. This system will generate expressive visual speech for any arbitrary utterance from the limited training data available, without the need for user intervention. In addition, a user can specify a desired expression, e.g. a happy expression for good news, and the output will automatically be adapted to that expression.
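The intended workflow can be sketched as a minimal interface: the user supplies only the utterance text and an optional expression, and everything else is synthesised automatically. All names below (`synthesise`, `Expression`, `TalkingHeadClip`) are hypothetical illustrations, not the project's actual API, and the "frames" are placeholders for per-frame facial parameters.

```python
from dataclasses import dataclass
from enum import Enum


class Expression(Enum):
    # Illustrative expression labels; the real system's set is not specified here.
    NEUTRAL = "neutral"
    HAPPY = "happy"
    SAD = "sad"


@dataclass
class TalkingHeadClip:
    text: str
    expression: Expression
    frames: list  # placeholder for per-frame facial animation parameters


def synthesise(text: str,
               expression: Expression = Expression.NEUTRAL) -> TalkingHeadClip:
    """Stand-in for the full pipeline: text -> visual speech parameters,
    adapted to the requested expression, with no further user input."""
    # Placeholder logic: one 'frame' per word, tagged with the expression.
    frames = [{"word": w, "expression": expression.value} for w in text.split()]
    return TalkingHeadClip(text, expression, frames)


# A happy expression is requested for good news; the output adapts to it.
clip = synthesise("I have some good news", Expression.HAPPY)
```

The point of the sketch is the shape of the interaction: a single call with text and a desired expression, rather than months of hand-tuned animation parameters.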
Status: Finished
Effective start/end date: 5/06/06 – 4/10/08

Funding

  • Engineering and Physical Sciences Research Council: £114,825.00