Detangling robustness in high dimensions: Composite versus model-averaged estimation

Jing Zhou, Gerda Claeskens, Jelena Bradic

Research output: Contribution to journal › Article › peer-review


Abstract

Robust methods, though ubiquitous in practice, are yet to be fully understood in the context of regularized estimation and high dimensions. Even simple questions become challenging very quickly. For example, classical statistical theory identifies an equivalence between model-averaged and composite quantile estimation. However, little to nothing is known about such an equivalence between methods that encourage sparsity. This paper provides a toolbox for further study of robustness in these settings and focuses on prediction. In particular, we study optimally weighted model-averaged as well as composite l1-regularized estimation. Optimal weights are determined by minimizing the asymptotic mean squared error. This approach incorporates the effects of regularization without assuming perfect model selection, an assumption often made in practice. Such weights are then optimal for prediction quality. Through an extensive simulation study, we show that no single method systematically outperforms the others. We find, however, that model-averaged and composite quantile estimators often outperform least-squares methods, even in the case of Gaussian model noise. A real-data application demonstrates the method's practical use through the reconstruction of compressed audio signals.
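For intuition about the composite estimator the abstract describes, below is a minimal sketch of composite l1-penalized quantile regression: a single slope vector is fit jointly across several quantile levels, each level with its own intercept, under an l1 penalty. This is not the authors' implementation; the cvxpy dependency, the equal weights, and all names (taus, lam, check_loss) are illustrative assumptions, whereas the paper instead derives weights that minimize the asymptotic mean squared error.

```python
# Minimal sketch (not the paper's code) of composite l1-penalized
# quantile regression, solved as a convex program with cvxpy.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0                      # sparse truth: 5 active coefficients
y = X @ beta_true + rng.standard_normal(n)

taus = [0.25, 0.5, 0.75]                 # composite quantile levels (illustrative)
weights = [1 / len(taus)] * len(taus)    # equal weights; the paper derives MSE-optimal ones
lam = 0.1                                # l1 penalty level (illustrative)

beta = cp.Variable(p)                    # common slope vector across quantile levels
b = cp.Variable(len(taus))               # one intercept per quantile level

def check_loss(r, tau):
    # quantile "check" loss: rho_tau(u) = max(tau * u, (tau - 1) * u)
    return cp.sum(cp.maximum(tau * r, (tau - 1) * r))

loss = sum(w * check_loss(y - X @ beta - b[k], t)
           for k, (w, t) in enumerate(zip(weights, taus)))
problem = cp.Problem(cp.Minimize(loss / n + lam * cp.norm1(beta)))
problem.solve()
print("selected coefficients:", np.flatnonzero(np.abs(beta.value) > 1e-4))
```

Replacing the equal weights with estimated MSE-optimal ones changes only the weights list; the convex program is otherwise unchanged.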
Original language: English
Pages (from-to): 2551-2599
Number of pages: 49
Journal: Electronic Journal of Statistics
Volume: 14
Issue number: 2
DOIs
Publication status: Published - 2020

Keywords

  • Approximate message passing
  • L1-regularization
  • Mean squared error
  • Quantile regression
