Abstract
Acoustic environments provide many valuable cues for context-aware computing applications. From the acoustic environment we can infer the type of activity taking place, the modes of communication in use and the other actors involved. Environmental or background noise can be classified with a high degree of accuracy using recordings from the microphones commonly found in PDAs and other consumer devices. We describe an acoustic environment recognition system incorporating an adaptive learning mechanism, and its use in a noise tracker. We show how this information is exploited in a mobile context framework. To illustrate our approach we describe a context-aware multimodal weather forecasting service, which accepts spoken or written queries and presents forecast information in several forms, including email, voice and sign language.
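
To make the classification step concrete, the following is a minimal sketch of how background noise recorded from a consumer-grade microphone might be mapped to coarse acoustic environment classes. It is not the authors' implementation: the spectral-band features, band edges, sample rate, synthetic training data and the random-forest classifier are all illustrative assumptions.

```python
# Hypothetical sketch: classify short audio frames into coarse acoustic
# environments using log-spaced spectral band energies and an off-the-shelf
# classifier. All parameter choices below are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SR = 8000       # assumed sample rate of a PDA-class microphone
FRAME = 2048    # samples per analysis frame (~0.25 s at 8 kHz)
BANDS = 8       # number of log-spaced frequency bands

def band_energies(frame, sr=SR, n_bands=BANDS):
    """Log energy in log-spaced frequency bands of one windowed audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    edges = np.logspace(np.log10(50), np.log10(sr / 2), n_bands + 1)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(spectrum[mask].sum() + 1e-10))
    return np.array(feats)

# Synthetic stand-in data: broadband noise vs. a low-frequency hum play the
# role of two different acoustic environments.
rng = np.random.default_rng(0)

def make_frames(env, n=200):
    frames = []
    for _ in range(n):
        signal = rng.standard_normal(FRAME)
        if env == "hum":
            t = np.arange(FRAME) / SR
            signal += 5 * np.sin(2 * np.pi * 100 * t)  # strong 100 Hz component
        frames.append(band_energies(signal))
    return np.array(frames)

X = np.vstack([make_frames("noise"), make_frames("hum")])
y = np.array(["street-like"] * 200 + ["machinery-like"] * 200)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([band_energies(rng.standard_normal(FRAME))]))
```

In a deployed noise tracker one would replace the synthetic frames with labelled recordings of real environments and retrain (or adapt) the model as new labelled audio arrives, which is the role the adaptive learning mechanism plays in the system described above.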
| Original language | English |
| --- | --- |
| Pages (from-to) | 241-254 |
| Number of pages | 14 |
| Journal | Personal and Ubiquitous Computing |
| Volume | 10 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 2005 |