Annual Conference on Neural Information Processing Systems (NeurIPS)

Algorithmic Stability and Uniform Generalization



Abstract

One of the central questions in statistical learning theory is to determine the conditions under which agents can learn from experience. This includes the necessary and sufficient conditions for generalization from a given finite training set to new observations. In this paper, we prove that algorithmic stability in the inference process is equivalent to uniform generalization across all parametric loss functions. We provide various interpretations of this result. For instance, a relationship is proved between stability and data processing, which reveals that algorithmic stability can be improved by post-processing the inferred hypothesis or by augmenting training examples with artificial noise prior to learning. In addition, we establish a relationship between algorithmic stability and the size of the observation space, which provides a formal justification for dimensionality reduction methods. Finally, we connect algorithmic stability to the size of the hypothesis space, which recovers the classical PAC result that the size (complexity) of the hypothesis space should be controlled in order to improve algorithmic stability and improve generalization.
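The notion of algorithmic stability invoked above can be illustrated with a minimal sketch (an illustration of the general concept, not a construction from the paper): for the sample-mean "learner" on data in [0, 1], replacing any single training point changes the output by at most 1/n, a replace-one stability bound that shrinks as the training set grows.

```python
import random

def learn_mean(sample):
    """A trivially simple 'learning algorithm': output the sample mean."""
    return sum(sample) / len(sample)

def empirical_stability(sample, replacements):
    """Largest change in the learned output when one point is replaced.

    This approximates the replace-one stability of the learner on this
    sample; for the mean on data in [0, 1] it is bounded by 1/n.
    """
    base = learn_mean(sample)
    worst = 0.0
    for i in range(len(sample)):
        for r in replacements:
            perturbed = sample[:i] + [r] + sample[i + 1:]
            worst = max(worst, abs(learn_mean(perturbed) - base))
    return worst

random.seed(0)
for n in (10, 100, 1000):
    data = [random.random() for _ in range(n)]  # points in [0, 1]
    beta = empirical_stability(data, replacements=[0.0, 1.0])
    assert beta <= 1.0 / n + 1e-12  # stability improves with sample size
```

The 1/n decay is the behavior the equivalence result connects to uniform generalization: the less any single observation can sway the inferred hypothesis, the better the learner generalizes across all parametric loss functions.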
机译:统计学习理论中的中心问题之一是确定代理商可以从经验中学习的条件。这包括从给定的有限训练集到新观测值进行概括的必要和充分条件。在本文中,我们证明了推理过程中的算法稳定性等效于所有参数损失函数的统一概括。我们对此结果提供各种解释。例如,证明了稳定性和数据处理之间的关系,这表明可以通过对推断的假设进行后处理或在学习之前用人工噪声增强训练示例来提高算法的稳定性。此外,我们建立了算法稳定性与观察空间大小之间的关系,这为降维方法提供了形式上的理由。最后,我们将算法稳定性与假设空间的大小联系起来,从而恢复了经典PAC结果,即应该控制假设空间的大小(复杂度),以提高算法稳定性并提高泛化能力。
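The remark that augmenting training examples with artificial noise can improve stability can likewise be sketched (a hypothetical illustration, not the paper's construction): appending jittered copies of each point enlarges the effective sample, so swapping one original point perturbs the learned mean less.

```python
import random

def learn_mean(sample):
    """Output the sample mean (the same toy learner as before)."""
    return sum(sample) / len(sample)

def augment(sample, copies, sigma, rng):
    """Append `copies` Gaussian-jittered duplicates of each point."""
    out = list(sample)
    for x in sample:
        out.extend(x + rng.gauss(0.0, sigma) for _ in range(copies))
    return out

rng = random.Random(1)
data = [rng.random() for _ in range(20)]

# Sensitivity of the plain learner to swapping the first point to 1.0:
plain = abs(learn_mean([1.0] + data[1:]) - learn_mean(data))

# With augmentation (noisy copies held fixed, only the original point
# swapped), the same perturbation is diluted over a larger sample:
aug_rest = augment(data[1:], copies=4, sigma=0.05, rng=rng)
augmented = abs(learn_mean([1.0] + aug_rest)
                - learn_mean([data[0]] + aug_rest))

assert augmented < plain  # the augmented learner is less sensitive
```

The mechanism here is purely the larger denominator; the paper's data-processing argument is the general statement of which this is one concrete, simplified instance.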
