Towards a low bandwidth talking face using appearance models

BJ Theobald, SM Kruse, JA Bangham, GC Cawley

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

This paper is motivated by the need to develop low bandwidth virtual humans capable of delivering audio-visual speech and sign language at a quality comparable to high bandwidth video. Using an appearance model combined with parameter compression significantly reduces the number of bits required for animating the face of a virtual human. A perceptual method is used to evaluate the quality of the synthesised sequences and it appears that 3.6 kb s⁻¹ can yield acceptable quality.
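The bandwidth figure in the abstract follows from encoding each frame as a small vector of appearance-model parameters rather than as pixels. A minimal sketch of that idea is shown below, using simple uniform quantisation of the parameters; all concrete numbers (parameter count, bit depth, frame rate) are illustrative assumptions chosen here, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch: an appearance model describes each video frame
# by a low-dimensional parameter vector (e.g. PCA coefficients), and
# quantising those parameters is one simple form of parameter
# compression. The numbers below are assumptions, not the paper's.

rng = np.random.default_rng(0)

n_params = 20          # appearance parameters per frame (assumed)
bits_per_param = 7     # uniform quantiser depth (assumed)
frame_rate = 25        # frames per second (assumed)

# Synthetic parameter trajectories standing in for tracked face data.
params = rng.normal(size=(100, n_params))

def quantise(x, n_bits, lo=-4.0, hi=4.0):
    """Uniformly quantise values clipped to [lo, hi] into integer codes."""
    levels = 2 ** n_bits - 1
    codes = np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * levels)
    return codes.astype(int)

codes = quantise(params, bits_per_param)

# Raw bitrate before any entropy coding: parameters x bits x frames/sec.
bitrate = n_params * bits_per_param * frame_rate

print(f"{bitrate / 1000:.1f} kbit/s")  # 3.5 kbit/s with these assumptions
```

With these assumed settings the raw rate lands near the 3.6 kb s⁻¹ the abstract reports as perceptually acceptable, which is why parameter-based coding is so much cheaper than transmitting video pixels.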
Original language: English
Pages (from-to): 1117-1124
Number of pages: 8
Journal: Image and Vision Computing
Volume: 21
Issue number: 13-14
DOIs
Publication status: Published - 1 Dec 2003