Fathoming empirical forecasting competitions’ winners

Azzam Alroomi, Georgios Karamatzanis, Konstantinos Nikolopoulos, Anna Tilba, Shujun Xiao

Research output: Contribution to journal › Article › peer-review


Abstract

The M5 forecasting competition has provided strong empirical evidence that machine learning methods can outperform statistical methods: in essence, that complex methods can be more accurate than simple ones. This result challenges the flagship empirical finding that has guided the forecasting discipline for the last four decades: keep methods sophisticatedly simple. Nevertheless, this was a first, and one could argue it will not happen again: every forecasting competition to date has had a different winner. This inevitably raises the question: can a method win more than once, and should it be expected to? Furthermore, we argue for the need to elaborate on the merits of competing methods and on what makes them winners.

Original language: English
Pages (from-to): 1519-1525
Number of pages: 7
Journal: International Journal of Forecasting
Volume: 38
Issue number: 4
Early online date: 7 Jun 2022
DOIs
Publication status: Published - Oct 2022

Keywords

  • Benchmarks
  • Competitions
  • Forecasting
  • Machine learning
  • Performance
