
Saliency Learning: Teaching the Model Where to Pay Attention

Abstract

Deep learning has emerged as a compelling solution to many NLP tasks, achieving remarkable performance. However, due to their opacity, such models are hard to interpret and trust. Recent work on explaining deep models has introduced approaches that provide insight into a model's behaviour and predictions, which helps in assessing the reliability of the model's predictions. However, such methods do not improve the model's reliability itself. In this paper, we aim to teach the model to make the right prediction for the right reason by providing explanation training and ensuring the alignment of the model's explanation with the ground-truth explanation. Our experimental results on multiple tasks and datasets demonstrate the effectiveness of the proposed method, which produces more reliable predictions while delivering better results than traditionally trained models.
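The core idea, aligning a model's saliency-based explanation with ground-truth evidence annotations during training, can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example assuming input-gradient saliency on a toy classifier; the evidence mask, the hinge-style alignment penalty, and the 0.5 trade-off weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Minimal sketch (hypothetical, not the paper's exact formulation):
# train a toy classifier with an auxiliary loss that aligns input-gradient
# saliency with a ground-truth evidence mask (1 = evidence token, 0 = not).

class TinyClassifier(nn.Module):
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.fc = nn.Linear(dim, 2)

    def forward(self, tokens):
        emb = self.embed(tokens)           # (batch, seq, dim)
        logits = self.fc(emb.mean(dim=1))  # (batch, 2)
        return logits, emb

model = TinyClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

tokens = torch.randint(0, 1000, (4, 10))  # toy batch of 4 sequences
labels = torch.randint(0, 2, (4,))
mask = torch.zeros(4, 10)
mask[:, :3] = 1.0                          # pretend the first 3 tokens are the evidence

logits, emb = model(tokens)
task_loss = ce(logits, labels)

# Saliency: gradient of the gold-class score w.r.t. each token embedding,
# summed over the embedding dimension to get one score per token.
gold_score = logits.gather(1, labels.unsqueeze(1)).sum()
grads = torch.autograd.grad(gold_score, emb, create_graph=True)[0]
saliency = grads.sum(dim=-1)               # (batch, seq)

# Hinge-style alignment: penalize evidence tokens whose saliency is negative,
# i.e. tokens annotated as important but pushing against the gold class.
saliency_loss = torch.relu(-saliency * mask).mean()

loss = task_loss + 0.5 * saliency_loss     # 0.5 is an arbitrary trade-off weight
opt.zero_grad()
loss.backward()
opt.step()
print(f"task={task_loss.item():.3f}  saliency={saliency_loss.item():.3f}")
```

Note that the hinge term only penalizes annotated evidence tokens whose saliency pushes against the gold class; non-evidence tokens are left unconstrained, so the model is guided toward the marked words without being forbidden from using other context.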