Automatic bias correction for testing in high‐dimensional linear models

Jing Zhou, Gerda Claeskens

Research output: Contribution to journal › Article › peer-review


Hypothesis testing based on a regularized estimator in high dimensions is challenging because the test statistic has a complicated asymptotic distribution. We propose a robust testing framework for ℓ1-regularized M-estimators that copes with non-Gaussian regression errors, using the robust approximate message passing algorithm. The proposed framework enjoys an automatically built-in bias correction and is applicable with general convex nondifferentiable loss functions, which also allows inference when the focus is a conditional quantile instead of the mean of the response. With the least squares loss function, the estimator compares numerically well with the debiased and desparsified approaches. The use of the Huber loss function demonstrates that the proposed construction provides stable confidence intervals under different regression error distributions.
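For orientation, the classical approximate message passing (AMP) recursion for the least-squares loss with an ℓ1 penalty can be sketched as below. This is the standard soft-thresholding AMP with a fixed threshold, not the authors' robust variant for general nondifferentiable losses; the function name, the fixed threshold `theta`, and the test problem are illustrative assumptions. The Onsager correction term in the residual update is what yields the automatic bias correction that debiasing approaches must add by hand.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: the proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp_lasso(A, y, theta=0.05, n_iter=30):
    """Sketch of standard AMP for the lasso (fixed threshold, illustrative).

    A : (n, p) design with roughly i.i.d. N(0, 1/n) entries
    y : (n,) response
    theta : fixed soft-threshold level (an assumption; in practice tuned
            or set adaptively from the estimated noise level)
    """
    n, p = A.shape
    delta = n / p                       # undersampling ratio
    x = np.zeros(p)
    z = y.copy()
    for _ in range(n_iter):
        pseudo = x + A.T @ z            # effective (pseudo) data for x
        x_new = soft_threshold(pseudo, theta)
        # Onsager correction: average derivative of the threshold function,
        # i.e. the fraction of active coordinates, scaled by 1/delta.
        onsager = np.mean(np.abs(x_new) > 0) / delta
        z = y - A @ x_new + onsager * z # residual with memory term
        x = x_new
    return x
```

A small synthetic check: with an i.i.d. Gaussian design, a sparse signal, and low noise, a few dozen iterations recover the signal to small relative error. Replacing the squared residual implicit above with a Huber-type score is the kind of modification the robust framework in the article formalizes.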
Original language: English
Pages (from-to): 71-98
Number of pages: 28
Journal: Statistica Neerlandica
Issue number: 1
Early online date: 1 Jul 2022
Publication status: Published - Feb 2023


  • approximate message passing algorithm
  • confidence interval
  • high-dimensional linear model
  • hypothesis testing
  • loss function
  • ℓ1-regularization
