Uncertainty in Artificial Intelligence

Almost-everywhere algorithmic stability and generalization error

Abstract

We explore in some detail the notion of algorithmic stability as a viable framework for analyzing the generalization error of learning algorithms. We introduce the new notion of training stability of a learning algorithm and show that, in a general setting, it is sufficient for good bounds on generalization error. In the PAC setting, training stability is both necessary and sufficient for learnability. The approach based on training stability makes no reference to VC dimension or VC entropy. There is no need to prove uniform convergence, and generalization error is bounded directly via an extended McDiarmid inequality. As a result it potentially allows us to deal with a broader class of learning algorithms than Empirical Risk Minimization. We also explore the relationships among VC dimension, generalization error, and various notions of stability. Several examples of learning algorithms are considered.
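For context, the abstract's "extended McDiarmid inequality" generalizes the classical bounded-differences inequality. The standard statement, and the generic shape of the stability-based generalization bound it yields, are sketched below; these are illustrative and are not the paper's exact extended inequality or its training-stability bound.

Let $Z_1,\dots,Z_m$ be independent random variables and let $f:\mathcal{Z}^m\to\mathbb{R}$ satisfy, for all $i$ and all $z_1,\dots,z_m,z_i'$,
$$\bigl|f(z_1,\dots,z_i,\dots,z_m)-f(z_1,\dots,z_i',\dots,z_m)\bigr|\le c_i .$$
Then for every $\epsilon>0$,
$$\Pr\bigl[f(Z_1,\dots,Z_m)-\mathbb{E}[f(Z_1,\dots,Z_m)]\ge\epsilon\bigr]\;\le\;\exp\!\left(-\frac{2\epsilon^2}{\sum_{i=1}^m c_i^2}\right).$$

Applying a concentration inequality of this kind to the generalization gap of a stable learning algorithm $A$ trained on a sample $S$ of size $m$ typically gives a bound of the shape: with probability at least $1-\delta$,
$$R(A_S)\;\le\;\hat{R}(A_S)\;+\;(\text{stability term})\;+\;O\!\left(\sqrt{\frac{\log(1/\delta)}{m}}\right),$$
where $R$ denotes the true risk and $\hat{R}$ the empirical risk on $S$.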
