Linguistic modelling and language-processing technologies for Avatar-based sign language presentation

R. Elliott, JRW Glauert, JR Kennaway, I Marshall, E Safar

Research output: Contribution to journal › Article › peer-review

82 Citations (Scopus)

Abstract

Sign languages are the native languages of many pre-lingually deaf people and must be treated as genuine natural languages worthy of academic study in their own right. For such pre-lingually deaf people, whose familiarity with their local spoken language is that of a second-language learner, written text is much less useful than is commonly thought. This paper presents research into sign language generation from English text at the University of East Anglia, involving sign language grammar development to support synthesis and visual realisation of sign language by a virtual human avatar. One strand of research in the ViSiCAST and eSIGN projects has concentrated on the real-time generation of sign language performance by a virtual human (avatar), given a phonetic-level description of the required sign sequence. A second strand has explored the generation of such a phonetic description from English text. The utility of the research is illustrated in the context of sign language synthesis by a preliminary consideration of plurality and placement within a grammar for British Sign Language (BSL). Finally, ways in which the animation generation subsystem has been used to develop signed content on public sector Web sites are also illustrated.
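The abstract describes a two-strand pipeline: one strand translates English text into a phonetic-level description of a sign sequence, and the other renders that description as real-time avatar animation. The minimal Python sketch below illustrates this staging only; the function names, the gloss-lookup translation, and the string-based sign descriptions are illustrative assumptions, not the authors' actual system or API.

```python
# Hypothetical two-stage sketch of the pipeline outlined in the abstract.
# Stage 2 (English -> phonetic description) and stage 1 (description ->
# avatar performance) are stubbed with placeholder logic.

def english_to_phonetic(text: str) -> list[str]:
    """Map English text to a sequence of phonetic-level sign
    descriptions. A real system uses grammar-based translation;
    this lookup table is purely illustrative."""
    lexicon = {"hello": "SIGN:HELLO", "world": "SIGN:WORLD"}
    # Unknown words fall back to fingerspelling, a common strategy.
    return [lexicon.get(w.lower(), f"FINGERSPELL:{w.upper()}")
            for w in text.split()]

def render_with_avatar(signs: list[str]) -> str:
    """Stand-in for the real-time animation subsystem: consume the
    phonetic description and produce a (textual) performance trace."""
    return " -> ".join(signs)

if __name__ == "__main__":
    phonetic = english_to_phonetic("hello world")
    print(render_with_avatar(phonetic))  # SIGN:HELLO -> SIGN:WORLD
```

The key design point the sketch mirrors is the clean interface between the two strands: the animation subsystem needs only the phonetic-level description, not the original English, which is what allowed the same avatar renderer to be reused for signed content on public sector Web sites.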
Original language: English
Pages (from-to): 375-391
Number of pages: 17
Journal: Universal Access in the Information Society
Volume: 6
Issue number: 4
DOIs
Publication status: Published - Feb 2008