Applied and Computational Harmonic Analysis

Balancing principle in supervised learning for a general regularization scheme



Abstract

We discuss the problem of parameter choice in learning algorithms generated by a general regularization scheme. Such a scheme covers well-known algorithms such as regularized least squares and gradient descent learning. It is known that, in contrast to classical deterministic regularization methods, the performance of regularized learning algorithms is influenced not only by the smoothness of the target function but also by the capacity of the space where regularization is performed. In the infinite-dimensional case the latter is usually measured in terms of the effective dimension. In the context of supervised learning, both the smoothness and the effective dimension are intrinsically unknown a priori. Therefore we are interested in a posteriori regularization parameter choice, and we propose a new form of the balancing principle. An advantage of this strategy over known rules such as cross-validation-based adaptation is that it does not require any data splitting and allows the use of all available labeled data in the construction of regularized approximants. We provide an analysis of the proposed rule and demonstrate its advantage in simulations. (C) 2018 Elsevier Inc. All rights reserved.
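To make the idea concrete, here is a minimal Lepskii-type sketch of a balancing principle applied to kernel ridge regression. This is an illustrative toy, not the paper's construction: the geometric grid of regularization parameters, the stochastic-error proxy `sigma`, the constant 4, and the empirical norm used to compare estimators are all assumptions made for the example. The rule picks the largest regularization parameter whose estimator stays within the combined error proxies of every less-regularized estimator.

```python
import numpy as np

def gauss_kernel(X, Y, width=0.5):
    # Gaussian kernel matrix for 1-D inputs.
    d = X[:, None] - Y[None, :]
    return np.exp(-(d ** 2) / (2 * width ** 2))

def krr_coeffs(K, y, lam):
    # Regularized least squares: (K + n*lam*I) alpha = y.
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def balancing_choice(K, y, lams, sigma):
    """Lepskii-type balancing: accept the largest lam whose in-sample
    predictions differ from every less-regularized fit by at most
    4 * (sigma(lam) + sigma(mu)) in the empirical L2 norm."""
    n = K.shape[0]
    lams = np.sort(np.asarray(lams))          # ascending order
    fits = [K @ krr_coeffs(K, y, lam) for lam in lams]
    chosen = lams[0]
    for i in range(1, len(lams)):
        ok = all(
            np.linalg.norm(fits[i] - fits[j]) / np.sqrt(n)
            <= 4 * (sigma(lams[i]) + sigma(lams[j]))
            for j in range(i)
        )
        if not ok:
            break
        chosen = lams[i]
    return chosen

# Synthetic regression problem: noisy sinc target.
rng = np.random.default_rng(0)
n = 60
X = np.linspace(-3.0, 3.0, n)
y = np.sinc(X) + 0.1 * rng.standard_normal(n)
K = gauss_kernel(X, X)

# Illustrative error proxy: noise level over sqrt(n * lam).
lams = np.geomspace(1e-6, 1.0, 20)
sigma = lambda lam: 0.1 / np.sqrt(n * lam)
lam_star = balancing_choice(K, y, lams, sigma)
```

In practice the proxy `sigma` would come from a bound involving the effective dimension, which the paper's a posteriori rule estimates without splitting the data; the explicit form here is purely for illustration.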
