
Regression with smoothly clipped absolute deviation penalty.


Abstract

In linear regression, variable selection becomes important for interpretation and prediction when a large number of predictors are present. Penalized regression addresses this problem by performing variable selection and coefficient estimation simultaneously.

We investigate regression with the smoothly clipped absolute deviation (SCAD) penalty. In particular, we study the asymptotic properties of the SCAD-penalized estimator, allowing the number of predictors p_n to go to infinity as the number of observations n goes to infinity. Like sample size computation in power analysis, this sheds light on the quality of the method. Our study improves on previous results for SCAD-penalized regression in that we no longer restrict the search for the SCAD-penalized estimator to a neighborhood of the true coefficients, yet we still obtain the oracle property of the estimator.

We also extend SCAD-penalized regression to the partially linear model, with a view to obtaining a more interpretable and sparse model in the linear part. Under reasonable assumptions, the estimator of the linear coefficients is shown to be consistent in variable selection and asymptotically normal for the nonzero coefficients. At the same time, the estimator of the nonparametric part is globally consistent and attains the optimal convergence rate of the purely nonparametric setting. Simulation is used to examine the finite sample behavior of the estimator.

As another extension, the method is applied to the accelerated failure time model. Under certain censoring assumptions, the oracle property continues to hold for the estimator defined as the minimizer of the SCAD-penalized Kaplan-Meier weighted least squares criterion. This extension justifies the use of penalized regression for variable selection and coefficient estimation when the responses are subject to right censoring.

Existing algorithms are adapted to compute the SCAD-penalized least squares estimator in these cases. Large sample theory, matrix inequalities, and spline theory are employed to establish the variable selection consistency and asymptotic normality of the SCAD-penalized least squares estimator in these settings. The estimators are also illustrated with real data examples, and their finite sample behavior is studied via simulation and compared with other widely used approaches.
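For reference, since the abstract does not reproduce it, the SCAD penalty of Fan and Li (2001) is defined for tuning parameters \lambda > 0 and a > 2 (a = 3.7 is the commonly suggested value) by

p_\lambda(|\beta|) =
\begin{cases}
\lambda|\beta|, & |\beta| \le \lambda, \\
\dfrac{2a\lambda|\beta| - \beta^2 - \lambda^2}{2(a-1)}, & \lambda < |\beta| \le a\lambda, \\
\dfrac{(a+1)\lambda^2}{2}, & |\beta| > a\lambda.
\end{cases}

It coincides with the lasso (absolute value) penalty near the origin, is quadratically clipped over the middle range, and is constant beyond a\lambda, so large coefficients are left unshrunk; this is what makes the oracle property discussed above possible.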
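The Kaplan-Meier weighted least squares criterion for the accelerated failure time model is presumably of the Stute type; the following is a sketch under that assumption, with Y_{(1)} \le \cdots \le Y_{(n)} the ordered logarithms of the observed (possibly censored) failure times, \delta_{(i)} the associated event indicators, and x_{(i)} the associated covariates. The weights are the jumps of the Kaplan-Meier estimator,

w_1 = \frac{\delta_{(1)}}{n}, \qquad
w_i = \frac{\delta_{(i)}}{n-i+1} \prod_{j=1}^{i-1} \left( \frac{n-j}{n-j+1} \right)^{\delta_{(j)}}, \quad i = 2, \ldots, n,

and the SCAD-penalized estimator minimizes

\frac{1}{2} \sum_{i=1}^{n} w_i \bigl( Y_{(i)} - x_{(i)}^{\top} \beta \bigr)^2 + \sum_{j=1}^{p_n} p_\lambda\bigl( |\beta_j| \bigr).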

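The abstract notes that existing algorithms are adapted to compute the SCAD-penalized least squares estimator. As an illustration only, and not the dissertation's own algorithm, here is a minimal coordinate-descent sketch in Python in the spirit of Breheny and Huang's ncvreg approach, built on the closed-form univariate SCAD solution of Fan and Li (2001); the function names and toy data are ours, and the columns of X are assumed standardized.

import numpy as np

def scad_threshold(z, lam, a=3.7):
    # Exact minimizer of 0.5*(z - b)^2 + p_lam(|b|) for the SCAD penalty
    # (Fan and Li, 2001): soft thresholding near zero, no shrinkage far out.
    az = abs(z)
    if az <= 2.0 * lam:
        return np.sign(z) * max(az - lam, 0.0)
    if az <= a * lam:
        return ((a - 1.0) * z - np.sign(z) * a * lam) / (a - 2.0)
    return z

def scad_cd(X, y, lam, a=3.7, max_iter=500, tol=1e-8):
    # Coordinate descent for (1/2n)*||y - X b||^2 + sum_j p_lam(|b_j|),
    # assuming columns of X have mean 0 and variance 1 and y is centered.
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                            # residual y - X @ beta
    for _ in range(max_iter):
        max_step = 0.0
        for j in range(p):
            zj = X[:, j] @ r / n + beta[j]  # "z" of the univariate problem
            bj = scad_threshold(zj, lam, a)
            step = bj - beta[j]
            if step != 0.0:
                r -= X[:, j] * step         # keep the residual consistent
                beta[j] = bj
                max_step = max(max_step, abs(step))
        if max_step < tol:
            break
    return beta

# Toy example: sparse truth, noisy observations.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)
beta_true = np.array([3.0, 1.5, 0, 0, 2.0, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.standard_normal(n)
y = y - y.mean()
print(np.round(scad_cd(X, y, lam=0.2), 2))

Because SCAD is nonconvex, coordinate descent of this kind is only guaranteed to reach a stationary point; in practice it is run along a decreasing grid of \lambda values with warm starts.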
Bibliographic details

  • Author

    Xie, Huiliang

  • Author affiliation

    The University of Iowa

  • Degree grantor: The University of Iowa
  • Subject: Statistics
  • Degree: Ph.D.
  • Year: 2007
  • Pages: 135 p.
  • Total pages: 135
  • Format: PDF
  • Language: English
  • CLC classification: Statistics

