Abstract
In previous work, we have argued that it is beneficial to find confidence measures (CMs) that do not depend on "side information" from a specific recogniser. Here, we extend this philosophy to the use of semantic information in estimating the confidence that a word is correct. We are motivated by the observation that the recogniser sometimes outputs a word that a human can easily spot as incorrect, because it bears no relation to the semantics of the rest of the decoded sentence. Latent semantic analysis (LSA) was used to estimate the "semantic similarity" between words in a text corpus. From these scores, an average semantic similarity of each decoded word to the other decoded words in an utterance could be estimated, and by thresholding this similarity measure, words were tagged as CORRECT or INCORRECT. We benchmarked the performance of this semantic CM against a tried-and-tested CM, the N-best CM. The precision of the semantic CM was inferior to that of N-best when the recall (the number of words considered) was high, but it outperformed N-best at low recall, and a combined classifier showed the benefits of using both techniques. An interesting and unexpected result was that the semantic CM was better at identifying correct words than incorrect words.
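For illustration, the sketch below shows the general idea described in the abstract, not the authors' implementation: words are embedded via a truncated SVD of a term-document count matrix (the standard LSA construction), each decoded word is scored by its average cosine similarity to the other words in the utterance, and the score is thresholded to tag the word CORRECT or INCORRECT. The toy corpus, the latent dimensionality `k`, and the threshold value are all illustrative assumptions.

```python
# Minimal sketch of an LSA-based semantic confidence measure.
# Assumptions: toy term-document counts, k=2 latent dimensions,
# and an arbitrary 0.5 threshold (none of these are from the paper).
import numpy as np

# Toy term-document count matrix: rows = vocabulary words, cols = documents.
vocab = ["flight", "airport", "ticket", "banana"]
counts = np.array([
    [3, 2, 0],   # "flight"
    [2, 3, 1],   # "airport"
    [1, 2, 0],   # "ticket"
    [0, 0, 4],   # "banana"
], dtype=float)

# LSA: truncated SVD, keeping k latent dimensions as word representations.
k = 2
U, S, _ = np.linalg.svd(counts, full_matrices=False)
word_vecs = U[:, :k] * S[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_cm(decoded, threshold=0.5):
    """Tag each decoded word by its average similarity to the other words."""
    idx = [vocab.index(w) for w in decoded]
    tags = {}
    for i, w in zip(idx, decoded):
        others = [j for j in idx if j != i]
        score = np.mean([cosine(word_vecs[i], word_vecs[j]) for j in others])
        tags[w] = ("CORRECT" if score >= threshold else "INCORRECT", score)
    return tags

# "banana" bears no relation to the rest of the decoded utterance,
# so its average similarity score should come out low.
print(semantic_cm(["flight", "airport", "banana"]))
```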
Original language | English
---|---
Pages | 206-209
Number of pages | 4
Publication status | Published - Oct 2000
Event | 6th International Conference on Spoken Language Processing, Beijing, China, 16 Oct 2000 → 20 Oct 2000
Conference
Conference | 6th International Conference on Spoken Language Processing
---|---
Abbreviated title | ICSLP 2000
Country/Territory | China
City | Beijing
Period | 16/10/00 → 20/10/00