Computational Statistics & Data Analysis

Group coordinate descent algorithms for nonconvex penalized regression



Abstract

We consider the problem of selecting grouped variables in linear and generalized linear regression models, based on penalized likelihood. A number of penalty functions have been used for this purpose, including the smoothly clipped absolute deviation (SCAD) penalty and the minimax concave penalty (MCP). Compared to the widely used Lasso, these penalty functions have attractive theoretical properties such as unbiasedness and selection consistency. Although model fitting methods using these penalties are well developed for individual variable selection, the extension to grouped variable selection is not straightforward, and the fitting can be unstable due to the nonconvexity of the penalty functions. To this end, we propose group coordinate descent (GCD) algorithms, which extend the regular coordinate descent algorithms. These GCD algorithms are efficient, in that the computational burden increases only linearly with the number of covariate groups. We also show that, using the GCD algorithms, the estimated parameters converge to a global minimum when the sample size is larger than the dimension of the covariates, and to a local minimum otherwise. In addition, we characterize the regions of the parameter space in which the objective function is locally convex, even though the penalty is nonconvex. Beyond group selection in the linear model, the GCD algorithms extend to generalized linear regression; we present the details of this extension using logistic regression as an example. The efficiency of the proposed algorithms is demonstrated through simulation studies and a real data example, in which the MCP-based and SCAD-based GCD algorithms provide improved group selection results compared to the group Lasso.
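
To make the group update concrete, the following is a minimal Python sketch of one GCD iteration for the linear model with the group MCP penalty. It assumes each group's columns have been orthonormalized so that X_j'X_j / n = I, which yields a closed-form firm-thresholding update on the group norm. The function names, the default gamma = 3, and the omission of group-size scaling of lambda are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def group_mcp_threshold(z, lam, gamma):
        # Firm (MCP) thresholding applied to the l2 norm of z; requires gamma > 1.
        norm_z = np.linalg.norm(z)
        if norm_z == 0.0:
            return np.zeros_like(z)
        if norm_z <= gamma * lam:
            shrunk = max(0.0, norm_z - lam) / (1.0 - 1.0 / gamma)
            return (shrunk / norm_z) * z
        return z.copy()  # no shrinkage beyond gamma * lam (unbiasedness region)

    def group_coordinate_descent(X, y, groups, lam, gamma=3.0,
                                 max_iter=200, tol=1e-6):
        # X: (n, p) design with each group's columns orthonormalized
        #    (X_j' X_j / n = I), which gives the closed-form group update.
        # groups: list of integer index arrays, one per non-overlapping group.
        n, p = X.shape
        beta = np.zeros(p)
        resid = y - X @ beta
        for _ in range(max_iter):
            max_step = 0.0
            for idx in groups:
                Xj = X[:, idx]
                # Partial-residual target for group j (add its own fit back in).
                z = Xj.T @ resid / n + beta[idx]
                b_new = group_mcp_threshold(z, lam, gamma)
                step = b_new - beta[idx]
                if np.any(step != 0.0):
                    resid -= Xj @ step  # keep the residual in sync
                    beta[idx] = b_new
                    max_step = max(max_step, np.max(np.abs(step)))
            if max_step < tol:
                break
        return beta

Swapping group_mcp_threshold for the analogous SCAD group threshold would give the SCAD-based variant; for logistic regression, the same inner loop can be applied to an iteratively reweighted quadratic approximation of the log-likelihood, in the spirit of the extension outlined in the abstract.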
