TY - JOUR
T1 - Globally adaptive quantile regression with ultra-high dimensional data
AU - Zheng, Qi
AU - Peng, Limin
AU - He, Xuming
N1 - Publisher Copyright:
© Institute of Mathematical Statistics, 2015.
PY - 2015/10/1
Y1 - 2015/10/1
N2 - Quantile regression has become a valuable tool to analyze heterogeneous covariate-response associations that are often encountered in practice. The development of quantile regression methodology for high-dimensional covariates primarily focuses on the examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high-dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantile levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal.
AB - Quantile regression has become a valuable tool to analyze heterogeneous covariate-response associations that are often encountered in practice. The development of quantile regression methodology for high-dimensional covariates primarily focuses on the examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high-dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantile levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal.
KW - Adaptive penalized quantile regression
KW - Model selection oracle property
KW - Ultra-high dimensional data
KW - Varying covariate effects
UR - https://www.scopus.com/pages/publications/84982104834
U2 - 10.1214/15-AOS1340
DO - 10.1214/15-AOS1340
M3 - Review article
AN - SCOPUS:84982104834
SN - 0090-5364
VL - 43
SP - 2225
EP - 2258
JO - Annals of Statistics
JF - Annals of Statistics
IS - 5
ER -