Journal: Neural Computation

Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation



Abstract

In this article we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate. The name sanity check refers to the fact that, although we often expect the leave-one-out estimate to perform considerably better than the training error estimate, we are here only seeking assurance that its performance will not be considerably worse. Perhaps surprisingly, such assurance has been given only for limited cases in the prior literature on cross-validation.
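The two estimates compared in the abstract can be illustrated concretely. The following is a minimal sketch (an assumed illustration for this page, not the paper's construction): it computes both the training-error estimate and the leave-one-out cross-validation estimate for a simple 1-nearest-neighbor classifier on a tiny synthetic dataset.

```python
def nn_predict(train, x):
    """Predict the label of x as the label of its nearest training point."""
    nearest = min(train, key=lambda point: abs(point[0] - x))
    return nearest[1]

def training_error(data):
    """Training-error estimate: error of the classifier on its own training set."""
    return sum(nn_predict(data, x) != y for x, y in data) / len(data)

def loo_error(data):
    """Leave-one-out estimate: for each point, train on the rest and test on it."""
    errors = 0
    for i, (x, y) in enumerate(data):
        held_out = data[:i] + data[i + 1:]  # all points except (x, y)
        errors += nn_predict(held_out, x) != y
    return errors / len(data)

# Hypothetical 1-D dataset: (feature, label); 0.7 lies near the class boundary.
data = [(0.0, 0), (0.1, 0), (0.2, 0), (1.0, 1), (1.1, 1), (0.7, 0)]

print(training_error(data))  # 0.0 -- 1-NN always fits its own training set
print(loo_error(data))       # nonzero: the boundary point is misclassified when held out
```

The gap between the two numbers is exactly why the training error can be an overly optimistic estimate of the generalization error; the sanity-check bounds of the paper guarantee that, in the worst case, the leave-one-out estimate is not much worse than this training-error estimate.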


