Abstract
This paper is motivated by the need to develop low-bandwidth virtual humans capable of delivering audio-visual speech and sign language at a quality comparable to high-bandwidth video. Using an appearance model combined with parameter compression significantly reduces the number of bits required to animate the face of a virtual human. A perceptual method is used to evaluate the quality of the synthesised sequences, and the results indicate that 3.6 kbit s⁻¹ can yield acceptable quality.
| Original language | English |
|---|---|
| Pages (from-to) | 1117-1124 |
| Number of pages | 8 |
| Journal | Image and Vision Computing |
| Volume | 21 |
| Issue number | 13-14 |
| Publication status | Published - 1 Dec 2003 |