Saliency Learning: Teaching the Model Where to Pay Attention


Abstract

Deep learning has emerged as a compelling solution to many NLP tasks with remarkable performance. However, due to their opacity, such models are hard to interpret and trust. Recent work on explaining deep models has introduced approaches that provide insight into a model's behaviour and predictions, which is helpful for assessing the reliability of the model's predictions. However, such methods do not improve the model's reliability itself. In this paper, we aim to teach the model to make the right prediction for the right reason by providing explanation training and ensuring the alignment of the model's explanation with the ground-truth explanation. Our experimental results on multiple tasks and datasets demonstrate the effectiveness of the proposed method, which produces more reliable predictions while delivering better results than traditionally trained models.
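
The core idea, aligning a gradient-based saliency explanation with annotated evidence, can be illustrated with a short sketch. Below is a minimal PyTorch sketch of one way such an alignment term could look, assuming the model accepts input embeddings directly and that token-level rationale annotations are available; the names saliency_loss and rationale_mask, and the hinge form of the penalty, are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn.functional as F

def saliency_loss(model, embeddings, labels, rationale_mask):
    # embeddings:     (batch, seq_len, dim) input embeddings
    # labels:         (batch,) gold class indices
    # rationale_mask: (batch, seq_len), 1.0 where a token is annotated evidence
    embeddings = embeddings.detach().requires_grad_(True)
    logits = model(embeddings)  # (batch, num_classes)
    gold_score = logits.gather(1, labels.unsqueeze(1)).sum()
    # Saliency = gradient of the gold-class score w.r.t. the input embeddings,
    # summed over the embedding dimension to give one score per token.
    grads, = torch.autograd.grad(gold_score, embeddings, create_graph=True)
    token_saliency = grads.sum(dim=-1)  # (batch, seq_len)
    # Hinge penalty: annotated evidence tokens should have non-negative saliency.
    penalty = rationale_mask * F.relu(-token_saliency)
    return penalty.sum() / rationale_mask.sum().clamp(min=1.0)

In training, such a term would be added to the usual task loss with a weighting hyperparameter, e.g. loss = F.cross_entropy(logits, labels) + lam * saliency_loss(model, embeddings, labels, rationale_mask), so the model is optimized both to predict correctly and to attribute its prediction to the annotated evidence.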
