JMLR: Workshop and Conference Proceedings

Does Distributionally Robust Supervised Learning Give Robust Classifiers?



Abstract

Distributionally Robust Supervised Learning (DRSL) is necessary for building reliable machine learning systems. When machine learning is deployed in the real world, its performance can be significantly degraded because test data may follow a different distribution from training data. DRSL with f-divergences explicitly considers the worst-case distribution shift by minimizing the adversarially reweighted training loss. In this paper, we analyze this DRSL, focusing on the classification scenario. Since the DRSL is explicitly formulated for a distribution shift scenario, we naturally expect it to give a robust classifier that can aggressively handle shifted distributions. However, surprisingly, we prove that the DRSL just ends up giving a classifier that exactly fits the given training distribution, which is too pessimistic. This pessimism comes from two sources: the particular losses used in classification and the fact that the variety of distributions to which the DRSL tries to be robust is too wide. Motivated by our analysis, we propose a simple DRSL that overcomes this pessimism and empirically demonstrate its effectiveness.
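The "adversarially reweighted training loss" mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation: it assumes a KL-divergence uncertainty set, for which the inner maximization over sample reweightings has a well-known softmax form over per-sample losses, with a temperature `beta` (a hypothetical parameter here) standing in for the radius of the divergence ball.

```python
import numpy as np

def adversarially_reweighted_loss(losses, beta=1.0):
    """Worst-case reweighted empirical loss under a KL uncertainty set.

    For the KL divergence, the adversary's optimal reweighting is
    proportional to exp(loss_i / beta): a softmax over per-sample
    losses. Small beta -> weight concentrates on the hardest examples
    (very pessimistic); large beta -> uniform weights (ordinary ERM).
    """
    losses = np.asarray(losses, dtype=float)
    scaled = losses / beta
    scaled -= scaled.max()            # subtract max for numerical stability
    weights = np.exp(scaled)
    weights /= weights.sum()          # normalize to a distribution
    return float(np.dot(weights, losses))

per_sample = [0.1, 0.5, 2.0]
print(adversarially_reweighted_loss(per_sample, beta=1e6))   # close to the plain mean
print(adversarially_reweighted_loss(per_sample, beta=1e-3))  # close to the max loss
```

The two limiting cases make the abstract's point concrete: as the uncertainty set widens (small `beta`), the objective is dominated by the worst examples, which is the source of the pessimism the paper analyzes.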


