Abstract
The M5 forecasting competition provided strong empirical evidence that machine learning methods can outperform statistical ones: in essence, complex methods can be more accurate than simple ones. This result challenges the flagship empirical finding that has guided the forecasting discipline for the last four decades: keep methods sophisticatedly simple. Nevertheless, this was a first, and one can argue that it will not happen again, since every forecasting competition to date has had a different winner. This inevitably raises the question: can a method win more than once, and should it be expected to? Furthermore, we argue for the need to examine the merits of competing methods and what makes them winners.
| | |
|---|---|
| Original language | English |
| Pages (from-to) | 1519-1525 |
| Number of pages | 7 |
| Journal | International Journal of Forecasting |
| Volume | 38 |
| Issue number | 4 |
| Early online date | 7 Jun 2022 |
| DOIs | |
| Publication status | Published - Oct 2022 |
Keywords
- Benchmarks
- Competitions
- Forecasting
- Machine learning
- Performance