A system designed to allow Post Office counter clerks to communicate with deaf customers by translating speech into sign language is described. The system uses approximately 370 pre-stored phrases, which can be signed to the customer by a specially designed avatar. Since clerks cannot be expected to memorise this many phrases, the system maps their input speech to the semantically equivalent pre-stored phrase. We describe a number of language-processing techniques developed to perform the mapping and give results obtained using alternative formulations of the phrases from a number of speakers. We then give results for recognised speech input and show how misrecognitions affect the mapping system. Best performance is obtained with a mapping system based on an entropy-weighted, vector-based distance measure between the test phrase and each of the signed phrases.
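The final sentence of the abstract points at a TF-IDF-style scheme: each stored phrase is represented as a bag-of-words vector whose components are scaled by an entropy-derived global weight, and the recognised utterance is mapped to the nearest stored phrase. A minimal sketch of one such scheme, using the standard log-entropy global weight with cosine distance; the function names, the tokenisation, and the exact weighting formula here are illustrative assumptions, not the paper's actual implementation:

```python
import math
from collections import Counter

def entropy_weights(phrases):
    """Global log-entropy weight per word: 1.0 for a word confined to a
    single phrase, 0.0 for a word spread uniformly over all phrases.
    Assumes at least two stored phrases (illustrative formula)."""
    n = len(phrases)
    tf = [Counter(p.split()) for p in phrases]   # term counts per phrase
    gf = Counter()                               # global term counts
    for counts in tf:
        gf.update(counts)
    weights = {}
    for word, g in gf.items():
        ent = sum((c[word] / g) * math.log(c[word] / g)
                  for c in tf if c[word])
        weights[word] = 1.0 + ent / math.log(n)
    return weights

def weighted_cosine_distance(a, b, weights):
    """Cosine distance between entropy-weighted bag-of-words vectors.
    Words unseen in the stored phrases get zero weight."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(weights.get(w, 0.0) ** 2 * va[w] * vb[w] for w in va if w in vb)
    na = math.sqrt(sum((weights.get(w, 0.0) * c) ** 2 for w, c in va.items()))
    nb = math.sqrt(sum((weights.get(w, 0.0) * c) ** 2 for w, c in vb.items()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def map_to_stored_phrase(spoken, stored_phrases):
    """Return the stored phrase nearest to the recognised utterance."""
    weights = entropy_weights(stored_phrases)
    return min(stored_phrases,
               key=lambda p: weighted_cosine_distance(spoken, p, weights))
```

The entropy weighting down-weights words that occur across many stored phrases (and so discriminate poorly between them), which is one plausible reason such a measure would tolerate the word substitutions and misrecognitions the abstract describes.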
Published: 2003
Event: IEEE Conference on Acoustics, Speech and Signal Processing (ICASSP '03), Hong Kong
Duration: 6 Apr 2003 → 10 Apr 2003