Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression
2019-05-08 16:54:53

This talk is concerned with high-dimensional error-in-variables regression, which aims to identify a small number of important, interpretable factors from corrupted data arising in applications where measurement errors or missing data cannot be ignored. Motivated by CoCoLasso of Datta and Zou (2017) and by the advantage of the zero-norm regularized LS estimator over the Lasso for clean data, we propose a calibrated zero-norm regularized LS (CaZnRLS) estimator by constructing a calibrated least squares loss with a positive definite projection of an unbiased surrogate for the covariance matrix of the covariates, and we compute this estimator with the multi-stage convex relaxation approach. Under a restricted strong convexity condition on the true covariate matrix, we derive the l_2-error bound of every iterate, show that the sequence of error bounds is decreasing, and establish the sign consistency of the iterates after finitely many steps. Statistical guarantees are also provided for the CaZnRLS estimator under two types of measurement errors. Numerical comparisons with CoCoLasso and NCL (the nonconvex Lasso of Loh and Wainwright (2012)) show that CaZnRLS achieves better relative RMSE as well as a comparable or even larger number of correctly identified predictors.
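
To make the construction concrete, below is a minimal Python sketch of the calibrated quadratic loss for the additive-error case, assuming observed covariates Z = X + W with a known (or pre-estimated) error covariance Sigma_w and using the unbiased surrogates of Loh and Wainwright (2012). The eigenvalue-clipping step is only one illustrative way to obtain a positive definite projection, not necessarily the projection used in the paper, and the names calibrated_quadratic, Sigma_w and eps are hypothetical; the paper's multi-stage convex relaxation solver is not reproduced here.

```python
import numpy as np

def calibrated_quadratic(Z, y, Sigma_w, eps=1e-8):
    """Build the pieces of a calibrated least squares loss under additive errors.

    Z       : (n, p) observed noisy covariates, Z = X + W
    y       : (n,)   responses
    Sigma_w : (p, p) covariance matrix of the measurement error W
    Returns (Sigma_tilde, rho_tilde) so that the calibrated loss is
        L(beta) = 0.5 * beta @ Sigma_tilde @ beta - rho_tilde @ beta
    """
    n, p = Z.shape
    # Unbiased surrogate of X^T X / n; it may be indefinite when p > n.
    Sigma_hat = Z.T @ Z / n - Sigma_w
    # Positive definite projection via eigenvalue clipping (illustrative only).
    w, V = np.linalg.eigh(Sigma_hat)
    Sigma_tilde = (V * np.maximum(w, eps)) @ V.T
    # Unbiased surrogate of X^T y / n (W is zero-mean and independent of the noise).
    rho_tilde = Z.T @ y / n
    return Sigma_tilde, rho_tilde
```

Given Sigma_tilde and rho_tilde, the calibrated quadratic loss replaces the ordinary least squares loss built from the corrupted covariates, and the zero-norm penalty is then handled by solving a sequence of weighted convex (Lasso-type) subproblems, which is the usual form a multi-stage convex relaxation scheme takes.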