We describe a method for automatically synthesizing deaf signing animations from a high-level description of signs in terms of the HamNoSys transcription system. Lifelike movement is achieved by combining a simple control model of hand movement with inverse kinematic calculations for placement of the arms. The realism can be further enhanced by mixing the synthesized animation with motion capture data for the spine and neck, to add natural "ambient motion".
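The paper's actual inverse kinematic formulation for a full 3D avatar is not reproduced in this abstract; purely as an illustration of the kind of calculation used to place an arm so that the hand reaches a target position, the sketch below solves the classic two-link (shoulder and elbow) IK problem in a plane. The link lengths and the planar simplification are assumptions made for the example, not values or conventions taken from the paper.

```python
import math


def two_link_ik(x, y, upper_len=0.30, fore_len=0.28):
    """Planar two-link inverse kinematics.

    Returns (shoulder, elbow) joint angles in radians so that the end of the
    chain reaches the target (x, y), measured from the shoulder at the origin.
    Link lengths are illustrative placeholders, not values from the paper.
    """
    # Distance from shoulder to target, clamped to the reachable annulus
    # so the solver degrades gracefully for out-of-range targets.
    d = math.hypot(x, y)
    d = max(abs(upper_len - fore_len) + 1e-9,
            min(d, upper_len + fore_len - 1e-9))

    # Law of cosines gives the elbow flexion angle (one of the two solutions).
    cos_elbow = (d * d - upper_len ** 2 - fore_len ** 2) / (2 * upper_len * fore_len)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))

    # Shoulder angle: direction to the target minus the offset introduced
    # by the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(
        fore_len * math.sin(elbow),
        upper_len + fore_len * math.cos(elbow))
    return shoulder, elbow


if __name__ == "__main__":
    s, e = two_link_ik(0.35, 0.25)
    print(f"shoulder = {math.degrees(s):.1f} deg, elbow = {math.degrees(e):.1f} deg")
```

A production signing avatar would solve a redundant 3D chain (shoulder, elbow, wrist) with orientation constraints, but the closed-form two-link case shows the basic geometry that any such solver builds on.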
| Field | Value |
| --- | --- |
| Name | Lecture Notes in Computer Science |
| Publisher | Springer Berlin / Heidelberg |
| Workshop | International Gesture Workshop |
| Period | 18/04/01 → 20/04/01 |