IEEE International Symposium on Information Theory

Learning under Distribution Mismatch and Model Misspecification



Abstract

We study learning algorithms when there is a mismatch between the distributions of the training and test datasets. The effect of this mismatch on the generalization error and on model misspecification is quantified. Moreover, we provide a connection between the generalization error and rate-distortion theory, which allows one to use bounds from rate-distortion theory to derive new bounds on the generalization error and vice versa. In particular, the rate-distortion-based bound strictly improves over the earlier bound by Xu and Raginsky, even when there is no mismatch. We also discuss how "auxiliary loss functions" can be used to obtain upper bounds on the generalization error. A full version of this paper is accessible at [1].
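As context for the abstract, here is a minimal sketch in LaTeX of the quantities involved, written in our own notation and assuming the standard information-theoretic setup of Xu and Raginsky; the paper's exact definitions may differ. With training set $S = (Z_1, \dots, Z_n) \sim \mu^n$, a possibly different test distribution $\mu'$, hypothesis $W$ returned by the algorithm, and loss $\ell(w, z)$:

% Population and empirical risks (our notation, for illustration only)
\[
  L_\mu(w) = \mathbb{E}_{Z \sim \mu}\big[\ell(w, Z)\big], \qquad
  L_S(w) = \frac{1}{n} \sum_{i=1}^{n} \ell(w, Z_i).
\]

% Generalization error under mismatch (train on mu, test on mu')
% decomposes into a distribution-shift term and the matched gap:
\[
  \mathbb{E}\big[L_{\mu'}(W) - L_S(W)\big]
  = \underbrace{\mathbb{E}\big[L_{\mu'}(W) - L_\mu(W)\big]}_{\text{distribution-shift term}}
  + \underbrace{\mathbb{E}\big[L_\mu(W) - L_S(W)\big]}_{\text{matched generalization gap}}.
\]

% Earlier bound of Xu and Raginsky for the matched gap, valid when
% ell(w, Z) is sigma-sub-Gaussian under mu for every w; this is the
% baseline the rate-distortion-based bound is stated to improve upon:
\[
  \Big|\mathbb{E}\big[L_\mu(W) - L_S(W)\big]\Big|
  \le \sqrt{\frac{2\sigma^2}{n}\, I(S; W)}.
\]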