Foundations of Intelligent Systems

Compression and Learning in Linear Regression

Abstract

We introduce a linear regression regularization method based on the minimum description length principle, which aims at both sparsification and avoidance of over-fitting. We begin by building compact prefix-free codes for both rational-valued parameters and integer-valued residuals, then build smooth approximations to their code lengths, so as to provide an objective function whose minimization yields optimal lossless compression under certain assumptions. We compare the method against the LASSO on simulated datasets proposed by Tibshirani [14], examining generalization and accuracy in recovering the sparsity structure.
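The abstract describes the overall construction: code lengths for the parameters and for the residuals are approximated by smooth functions, and their sum is minimized as a single objective. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' construction: the logarithmic code-length surrogates, the quantization step `precision`, and the helper names `smooth_code_length` and `residual_code_length` are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_code_length(beta, precision=1e-3):
    # Assumed surrogate for the bits needed to encode each coefficient at a
    # fixed quantization step: log2(1 + |beta| / precision). Coefficients near
    # zero cost almost nothing, which encourages sparsity.
    return np.sum(np.log2(1.0 + np.abs(beta) / precision))

def residual_code_length(beta, X, y):
    # Assumed smooth surrogate for the bits needed to encode the residuals:
    # the cost grows slowly with residual magnitude.
    r = y - X @ beta
    return np.sum(np.log2(1.0 + np.abs(r)))

def mdl_objective(beta, X, y):
    # Total description length = parameter bits + residual bits.
    return smooth_code_length(beta) + residual_code_length(beta, X, y)

# Toy data with a sparse ground-truth coefficient vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
true_beta = np.array([3.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0, 2.0])
y = X @ true_beta + rng.normal(scale=1.0, size=50)

# Derivative-free minimization of the description-length objective.
res = minimize(mdl_objective, x0=np.zeros(8), args=(X, y), method="Powell")
print(np.round(res.x, 2))
```

In the paper the resulting objective is evaluated against the LASSO on Tibshirani's simulated datasets; the sketch above only shows the general shape of a description-length objective, not the comparison itself.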
