International Conference on Machine Learning

Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge



Abstract

For an explanation of a deep learning model to be effective, it must both provide insight into the model and suggest a corresponding action toward an objective. Too often, the litany of proposed explainable deep learning methods stops at the first step, providing practitioners with insight into a model but no way to act on it. In this paper we propose contextual decomposition explanation penalization (CDEP), a method that enables practitioners to leverage explanations to improve the performance of a deep learning model. In particular, CDEP enables inserting domain knowledge into a model to ignore spurious correlations, correct errors, and generalize across different types of dataset shift. We demonstrate the ability of CDEP to increase performance on an array of toy and real datasets.
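The core idea of penalizing explanations can be illustrated on a toy problem. The sketch below is a hedged simplification, not the authors' CDEP implementation: it uses a logistic model whose input-gradient attribution for feature j is just the weight w[j], so "penalize the explanation of a spurious feature" reduces to an extra penalty term on that weight during training. All names, the data-generating setup, and the penalty strength `lam` are illustrative assumptions.

```python
import numpy as np

# Toy sketch of explanation penalization (not the authors' CDEP code):
# feature 0 is causal, feature 1 is a spurious correlate that domain
# knowledge tells us to ignore. For a linear model, the input-gradient
# attribution of feature j equals w[j], so the explanation penalty
# becomes a penalty on w[1].

rng = np.random.default_rng(0)
n = 200
x_true = rng.normal(size=n)                      # causal feature
x_spur = x_true + rng.normal(scale=0.1, size=n)  # spurious, correlated feature
X = np.column_stack([x_true, x_spur])
y = (x_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=2000, lr=0.1):
    """Logistic regression; lam weights the explanation penalty on feature 1."""
    w = np.zeros(2)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n   # cross-entropy gradient
        grad[1] += lam * w[1]      # penalize attribution to the spurious feature
        w -= lr * grad
    return w

w_plain = train(lam=0.0)   # relies on both features (they are correlated)
w_pen = train(lam=10.0)    # pushed to rely on the causal feature instead
print(w_plain, w_pen)
```

With the penalty active, the weight on the spurious feature shrinks toward zero while the model remains free to fit the causal feature, mirroring how CDEP steers a network away from spurious correlations without retraining from scratch on new data.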
