Journal of Applied Statistics

Variable selection via penalized minimum φ-divergence estimation in logistic regression

Abstract

We propose a penalized minimum φ-divergence estimator for parameter estimation and variable selection in logistic regression. Using an appropriate penalty function, we show that the penalized φ-divergence estimator has the oracle property: with probability tending to 1, it identifies the true model and estimates the nonzero coefficients as efficiently as if the sparsity of the true model were known in advance. The advantage of the penalized φ-divergence estimator is that it estimates the nonzero parameters more efficiently than the penalized maximum likelihood estimator when the sample size is small, and is equivalent to it for large samples. Numerical simulations confirm our findings.
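As a rough illustration of the approach, the Python sketch below fits a penalized φ-divergence estimator to toy grouped binary data. It assumes a Cressie-Read power-divergence member for φ and an L1 (lasso-type) penalty; the abstract does not specify the paper's actual choice of φ, penalty function (e.g. SCAD), or optimization scheme, so these choices and all names in the code are illustrative.

```python
import numpy as np
from scipy.optimize import minimize


def power_divergence(p_hat, p_model, lam=2.0 / 3.0):
    """Cressie-Read power divergence between two probability vectors."""
    p_hat = np.clip(p_hat, 1e-12, 1.0)
    p_model = np.clip(p_model, 1e-12, 1.0)
    return np.sum(p_hat * ((p_hat / p_model) ** lam - 1.0)) / (lam * (lam + 1.0))


def penalized_phi_objective(beta, X, y_prop, n_groups, lam_pen=0.1):
    """phi-divergence between observed group proportions and logistic model
    probabilities, plus an L1 penalty on the slopes (intercept unpenalized)."""
    p_model = 1.0 / (1.0 + np.exp(-(X @ beta)))
    total = 0.0
    for j in range(len(y_prop)):
        # divergence over the two cells (success, failure) of group j,
        # weighted by the number of observations in that group
        p_hat_j = np.array([y_prop[j], 1.0 - y_prop[j]])
        p_mod_j = np.array([p_model[j], 1.0 - p_model[j]])
        total += n_groups[j] * power_divergence(p_hat_j, p_mod_j)
    return total / n_groups.sum() + lam_pen * np.sum(np.abs(beta[1:]))


# toy grouped data: 5 covariate settings, first column is an intercept,
# and the true coefficient vector is sparse (two of three slopes are zero)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(5), rng.normal(size=(5, 3))])
n_groups = np.array([30, 25, 40, 20, 35])
true_beta = np.array([0.5, 1.0, 0.0, 0.0])
y_prop = rng.binomial(n_groups, 1.0 / (1.0 + np.exp(-(X @ true_beta)))) / n_groups

# Nelder-Mead is used here only because the L1 term makes the objective non-smooth
res = minimize(penalized_phi_objective, x0=np.zeros(X.shape[1]),
               args=(X, y_prop, n_groups), method="Nelder-Mead")
print("penalized phi-divergence estimate:", np.round(res.x, 3))
```

In practice the non-smooth penalty would call for a proximal or coordinate-descent solver rather than the derivative-free minimizer used in this toy example.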
