Abstract
Active Appearance Models (AAMs) are generative parametric models commonly used to track, recognise and synthesise faces in images and video sequences. In this paper we describe a method for transferring dynamic facial gestures between subjects in real time. The main advantages of our approach are: 1) the mapping is computed automatically and does not require high-level semantic information describing facial expressions or visual speech gestures; 2) the mapping is simple and intuitive, allowing expressions to be transferred and rendered in real time; 3) the mapped expression can be constrained to have the appearance of the target producing the expression, rather than the source expression imposed onto the target face; and 4) near-videorealistic talking faces for new subjects can be created without the cost of recording and processing a complete training corpus for each. Our system enables face-to-face interaction with an avatar driven by an AAM of an actual person in real time, and we show examples of arbitrary expressive speech frames cloned across different subjects.
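The abstract does not spell out the form of the mapping, so the following is only a minimal illustrative sketch of one plausible realisation: a linear map with bias, learned by least squares between corresponding source and target AAM parameter vectors, then applied per frame for real-time transfer. The function names and the least-squares formulation are assumptions for illustration, not the paper's actual algorithm.

```python
# Hypothetical sketch (not the paper's method): learn a linear map with bias
# between source and target AAM parameter spaces from corresponding frames,
# then apply it to each incoming source frame.
import numpy as np

def fit_mapping(source_params, target_params):
    """Learn M so that target ~= [source, 1] @ M.

    source_params, target_params: (n_frames, n_modes) arrays of AAM
    shape/appearance parameters for corresponding frames of the two subjects.
    """
    # Augment with a constant column so the map includes a bias term.
    X = np.hstack([source_params, np.ones((source_params.shape[0], 1))])
    M, *_ = np.linalg.lstsq(X, target_params, rcond=None)
    return M

def map_parameters(M, source_frame_params):
    """Map one source frame's AAM parameters into the target's space."""
    x = np.append(source_frame_params, 1.0)
    return x @ M

# Toy usage with random stand-in data (real data would come from AAM fits).
rng = np.random.default_rng(0)
src = rng.standard_normal((200, 15))              # 200 frames, 15 source modes
tgt = src @ rng.standard_normal((15, 12)) + 0.1   # synthetic target parameters
M = fit_mapping(src, tgt)
cloned = map_parameters(M, src[0])                # target-space parameters, frame 0
```

Because applying such a mapping is a single matrix-vector product per frame, it is cheap enough for real-time rendering, which is consistent with the "simple and intuitive" mapping the abstract describes.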
Original language | English
---|---
Pages | 134-139
Number of pages | 6
Publication status | Published - 2007
Event | 9th International Conference on Multimodal Interfaces, Nagoya, Aichi, Japan (12 Nov 2007 → 15 Nov 2007)
Conference

Conference | 9th International Conference on Multimodal Interfaces
---|---
Country/Territory | Japan
City | Nagoya, Aichi
Period | 12/11/07 → 15/11/07