Abstract
While the Dynamic Time Warping (DTW)-based nearest-neighbor classification algorithm is regarded as a strong baseline for time series classification, in recent years a plethora of algorithms have claimed to improve upon its accuracy in the general case. Many of these proposals sacrifice the simplicity of implementation that DTW-based classifiers offer for rather modest gains. Nevertheless, there are clearly times when even a small improvement could make a large difference in an important medical or financial domain. In this work, we make an unexpected claim: an underappreciated “low-hanging fruit” in optimizing DTW’s performance can produce improvements that make it an even stronger baseline, closing most or all of the improvement gap of the more sophisticated methods. We show that the method currently used to learn DTW’s only parameter, the maximum amount of warping allowed, is likely to give the wrong answer for small training sets. We introduce a simple method to mitigate the small-training-set issue by creating synthetic exemplars to help learn the parameter. We evaluate our ideas on the UCR Time Series Archive and a case study in fall classification, and demonstrate that our algorithm produces a significant improvement in classification accuracy.
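To make the setting concrete, below is a minimal Python sketch of the pipeline the abstract describes: DTW 1-NN classification with a constrained warping window, and a search for the window width on the training set. It assumes equal-length univariate series stored as a list of 1-D NumPy arrays, uses a Sakoe-Chiba band for the warping constraint and leave-one-out accuracy for the parameter search, and the `augment` helper generates synthetic exemplars by simple midpoint interpolation between same-class pairs. The band, the leave-one-out search, and especially the interpolation step are illustrative stand-ins; the abstract does not spell out the paper's exact exemplar-generation scheme.

```python
import numpy as np

def dtw_distance(a, b, w):
    """DTW distance between 1-D series a and b under a Sakoe-Chiba
    band of half-width w (w = 0 degenerates to Euclidean distance,
    w >= len(a) is unconstrained DTW)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - w), min(m, i + w)
        for j in range(lo, hi + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

def nn_classify(query, X_train, y_train, w):
    """1-NN label for query under DTW with window width w."""
    dists = [dtw_distance(query, x, w) for x in X_train]
    return y_train[int(np.argmin(dists))]

def loocv_accuracy(X, y, w):
    """Leave-one-out 1-NN accuracy on the training set for window w;
    X must be a list of equal-length 1-D arrays."""
    hits = 0
    for i in range(len(X)):
        X_rest = X[:i] + X[i + 1:]
        y_rest = y[:i] + y[i + 1:]
        hits += (nn_classify(X[i], X_rest, y_rest, w) == y[i])
    return hits / len(X)

def augment(X, y, rng):
    """Hypothetical augmentation: one synthetic exemplar per class,
    the midpoint of a random same-class pair. The paper's actual
    resampling scheme is not described in the abstract."""
    X_aug, y_aug = list(X), list(y)
    for label in set(y):
        idx = [i for i, yi in enumerate(y) if yi == label]
        if len(idx) >= 2:
            i, j = rng.choice(idx, size=2, replace=False)
            X_aug.append(0.5 * (X[i] + X[j]))
            y_aug.append(label)
    return X_aug, y_aug

def learn_window(X, y, max_w, augment_first=True, seed=0):
    """Pick the warping window that maximizes leave-one-out accuracy,
    optionally on an augmented version of a small training set."""
    if augment_first:
        X, y = augment(X, y, np.random.default_rng(seed))
    scores = [(loocv_accuracy(X, y, w), w) for w in range(max_w + 1)]
    return max(scores)[1]  # ties break toward the larger window
```

With a tiny training set, `learn_window(X_train, y_train, max_w=len(X_train[0]) // 2)` would search windows up to half the series length; the abstract's point is that running this search on the raw (unaugmented) set is likely to pick the wrong width, which the synthetic exemplars are meant to mitigate.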
| Original language | English |
| --- | --- |
| Title of host publication | IEEE International Conference on Big Data |
| Publisher | The Institute of Electrical and Electronics Engineers (IEEE) |
| Pages | 917-922 |
| Number of pages | 6 |
| DOIs | |
| Publication status | Published - Dec 2017 |
| Event | 2017 IEEE International Conference on Big Data, Boston, United States, 11 Dec 2017 → 14 Dec 2017 |
Conference

| Conference | 2017 IEEE International Conference on Big Data |
| --- | --- |
| Country/Territory | United States |
| City | Boston |
| Period | 11/12/17 → 14/12/17 |