Shared Concepts and Topics in High-Performance Statistical Procedures

Adaptive Lasso Selection

Adaptive lasso selection is a modification of lasso selection in which weights are applied to each of the parameters in forming the lasso constraint (Zou 2006). More precisely, suppose that the response $y$ has mean 0 and the regressors $x$ are scaled to have mean 0 and common standard deviation. Furthermore, suppose that you can find a suitable estimator $\hat{\beta}$ of the parameters in the true model, and you define a weight vector by $w = 1/|\hat{\beta}|^{\gamma}$, where $\gamma \ge 0$. Then the adaptive lasso regression coefficients $\beta = (\beta_1, \beta_2, \ldots, \beta_m)$ are the solution to the following constrained optimization problem:

\[
\min_{\beta} \; \lVert \mathbf{y} - \mathbf{X}\boldsymbol{\beta} \rVert^{2} \quad \text{subject to} \quad \sum_{j=1}^{m} \lvert w_j \beta_j \rvert \le t
\]

PROC HPREG uses the solution to the unconstrained least squares problem as the estimator $\hat{\beta}$. This is appropriate unless collinearity is a concern; if the regressors are collinear or nearly so, Zou (2006) suggests using a ridge regression estimate to form the adaptive weights instead.
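As an illustration of the computation described above (not the PROC HPREG implementation itself), the following sketch fits an adaptive lasso with scikit-learn by folding the weights into the design matrix: each column is rescaled by $1/w_j = |\hat{\beta}_j|^{\gamma}$, an ordinary lasso is fit, and the coefficients are rescaled back. Note that scikit-learn's `Lasso` solves the penalized (Lagrangian) form of the problem with tuning parameter `alpha`, rather than the constrained form with bound $t$; the `alpha` and `gamma` values here are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """Adaptive lasso via a reweighted ordinary lasso (illustrative sketch)."""
    # Pilot estimate: unconstrained least squares, as in the text above.
    ols = LinearRegression(fit_intercept=False).fit(X, y)
    # Adaptive weights w_j = 1 / |beta_hat_j|^gamma.
    w = 1.0 / np.abs(ols.coef_) ** gamma
    # Absorb the weights into the design: column j is scaled by 1/w_j.
    X_scaled = X / w
    # Ordinary lasso on the rescaled problem (penalized form, not the bound t).
    lasso = Lasso(alpha=alpha, fit_intercept=False).fit(X_scaled, y)
    # Undo the column scaling to recover coefficients for the original X.
    return lasso.coef_ / w
```

Because a small pilot estimate $\hat{\beta}_j$ yields a large weight $w_j$, the corresponding rescaled column is shrunk toward zero, so the lasso step penalizes that parameter more heavily; this is what gives the adaptive lasso its oracle variable-selection behavior relative to the ordinary lasso.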

Last updated: December 09, 2022