Real-time expression cloning using appearance models

Barry-John Theobald, Iain A. Matthews, Jeffrey F. Cohn, Steven M. Boker

Research output: Contribution to conference › Paper

31 Citations (Scopus)

Abstract

Active Appearance Models (AAMs) are generative parametric models commonly used to track, recognise and synthesise faces in images and video sequences. In this paper we describe a method for transferring dynamic facial gestures between subjects in real-time. The main advantages of our approach are that: (1) the mapping is computed automatically and does not require high-level semantic information describing facial expressions or visual speech gestures; (2) the mapping is simple and intuitive, allowing expressions to be transferred and rendered in real-time; (3) the mapped expression can be constrained to have the appearance of the target producing the expression, rather than the source expression imposed onto the target face; and (4) near-videorealistic talking faces for new subjects can be created without the cost of recording and processing a complete training corpus for each. Our system enables real-time face-to-face interaction with an avatar driven by an AAM of an actual person, and we show examples of arbitrary expressive speech frames cloned across different subjects.
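
The abstract describes the mapping only at a high level. As a minimal illustrative sketch, not the authors' actual method, one simple and intuitive parameter-space transfer consistent with the abstract would add the source subject's deviation from its neutral AAM parameters to the target subject's neutral parameters, then clamp the result to the target model's plausible range so the rendered face keeps the target's appearance. All identifiers below (clone_expression, p_src, sigma, etc.) are hypothetical, and the +/- 3 standard deviation constraint is an assumed convention from AAM practice.

    import numpy as np

    def clone_expression(p_src, p_src_neutral, p_tgt_neutral):
        # Source expression as a displacement from the source's neutral
        # parameters, re-applied to the target's neutral parameters so the
        # gesture is rendered with the target's own appearance.
        delta = p_src - p_src_neutral
        return p_tgt_neutral + delta

    def constrain(b, sigma, k=3.0):
        # Clamp each AAM parameter to +/- k standard deviations of the
        # target model's training data, keeping the result plausible for
        # the target face (assumed constraint, cf. advantage (3)).
        return np.clip(b, -k * sigma, k * sigma)

    def synthesise(b, x0, P):
        # Standard linear generative AAM model: x = x0 + P b, where x0 is
        # the mean shape/appearance and P the basis of modes of variation.
        return x0 + P @ b

    # Toy usage with 4-D parameter vectors (illustrative only)
    p_src = np.array([0.8, -0.2, 0.5, 0.1])
    p_src_neutral = np.zeros(4)
    p_tgt_neutral = np.array([0.1, 0.0, -0.1, 0.2])
    sigma = np.array([1.0, 0.5, 0.8, 0.3])
    p_tgt = constrain(clone_expression(p_src, p_src_neutral, p_tgt_neutral), sigma)

Because the transfer is a single vector addition per frame (plus one clip and one matrix-vector product to render), such a mapping would run comfortably in real-time, which matches the abstract's claims.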
Original language: English
Pages: 134-139
Number of pages: 6
Publication status: Published - 2007
Event: 9th International Conference on Multimodal Interfaces - Nagoya, Aichi, Japan
Duration: 12 Nov 2007 - 15 Nov 2007

Conference

Conference: 9th International Conference on Multimodal Interfaces
Country/Territory: Japan
City: Nagoya, Aichi
Period: 12/11/07 - 15/11/07
