Hypothesis testing is challenging when the test statistic is based on a regularized estimator in high dimensions, because the statistic's asymptotic distribution is complicated. We propose a robust testing framework for ℓ1-regularized M-estimators that copes with non-Gaussian regression errors, using the robust approximate message passing (AMP) algorithm. The proposed framework enjoys an automatically built-in bias correction and applies to general convex, nondifferentiable loss functions, which also allows inference when the focus is a conditional quantile rather than the mean of the response. With the least squares loss, the estimator compares well numerically with the debiased and desparsified approaches; with the Huber loss, the proposed construction provides stable confidence intervals under different regression error distributions.
- approximate message passing algorithm
- confidence interval
- high-dimensional linear model
- hypothesis testing
- loss function
- ℓ1-regularization
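To make the AMP family concrete, here is a minimal sketch of the standard AMP iteration for the least-squares (LASSO) case mentioned in the abstract: a soft-thresholding step plus the Onsager correction term that gives the algorithm its built-in bias adjustment. This is an illustrative toy, not the paper's robust variant; the robust version would replace the least-squares residual with a score based on a robust loss such as Huber's. All names, dimensions, and tuning constants below are assumptions for the demonstration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink v toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, p, k = 250, 500, 20                      # samples, dimension, sparsity
A = rng.standard_normal((n, p)) / np.sqrt(n)  # columns roughly unit norm
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)

x = np.zeros(p)
z = y.copy()
alpha = 1.5                                  # threshold tuning constant
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(n)     # estimated effective noise level
    x_new = soft_threshold(x + A.T @ z, alpha * tau)
    # Onsager term: (1/delta) * <eta'> * z with delta = n/p and
    # <eta'> = fraction of nonzeros in x_new, i.e. ||x_new||_0 / p.
    onsager = (np.count_nonzero(x_new) / n) * z
    z = y - A @ x_new + onsager
    x = x_new
```

The Onsager term is what distinguishes AMP from plain iterative soft-thresholding: it debiases the residual at every step, which is the mechanism the abstract's "automatically built-in bias correction" refers to.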