Machine Learning (ML95)

For Every Generalization Action, Is There Really an Equal and Opposite Reaction? Analysis of the Conservation Law for Generalization Performance



Abstract

The "Conservation Law for Generalization Performance" [Schaffer, 1994] states that for any learning algorithm and bias, "generalization is a zero-sum enterprise." In this paper we study the law and show that while the law is true, the manner in which the Conservation Law adds up generalization performance over all target concepts, without regard to the probability with which each concept occurs, is relevant only in a uniformly random universe. We then introduce a more meaningful measure of generalization, expected generalization performance. Unlike the Conservation Law's measure of generalization performance (which is, in essence, defined to be zero), expected generalization performance is conserved only when certain symmetric properties hold in our universe. There is no reason to believe, a priori, that such symmetries exist; learning algorithms may well exhibit non-zero (expected) generalization performance.
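The zero-sum claim and its breakdown under a non-uniform prior can both be checked by brute force on a tiny universe. The sketch below is not the paper's own experiment; it uses a hypothetical majority-vote learner (the law holds for any learner) and an arbitrary prior that favours 1-heavy concepts. Summing off-training-set performance uniformly over all target concepts gives exactly zero, while the weighted expectation does not:

```python
from itertools import product

instances = list(range(8))                      # tiny universe: 8 instances
train, ots = instances[:4], instances[4:]       # fixed off-training-set split

def learner(train_labels):
    # Majority-vote learner (a hypothetical choice; the
    # Conservation Law holds for ANY learner and bias).
    return 1 if sum(train_labels) * 2 >= len(train_labels) else 0

total_gp = 0.0      # Schaffer's unweighted sum over all targets
expected_gp = 0.0   # expectation under a non-uniform prior over targets
norm = 0.0

for target in product([0, 1], repeat=len(instances)):  # all 256 concepts
    pred = learner([target[i] for i in train])
    # Generalization performance on unseen instances: +1 correct, -1 wrong
    gp = sum(1 if target[x] == pred else -1 for x in ots)
    total_gp += gp
    weight = 2.0 ** sum(target)   # hypothetical prior favouring 1-heavy concepts
    expected_gp += weight * gp
    norm += weight

expected_gp /= norm
print(total_gp)      # 0.0: "zero-sum" when every concept counts equally
print(expected_gp)   # positive: nonzero expected generalization under this prior
```

The pairing argument behind the zero result: flipping a target's labels only on the off-training-set instances leaves the training data, and hence the learner's predictions, unchanged, so the two targets' scores cancel exactly; a non-uniform prior breaks that symmetry.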
