Annual Meeting of the Association for Computational Linguistics

Progressive Self-Supervised Attention Learning for Aspect-Level Sentiment Analysis

Abstract

In aspect-level sentiment classification (ASC), it is prevalent to equip dominant neural models with attention mechanisms, in order to capture the importance of each context word for the given aspect. However, such mechanisms tend to focus excessively on a few frequent words with clear sentiment polarity while ignoring infrequent ones. In this paper, we propose a progressive self-supervised attention learning approach for neural ASC models, which automatically mines useful attention supervision information from a training corpus to refine attention mechanisms. Specifically, we iteratively conduct sentiment predictions on all training instances. At each iteration, the context word with the maximum attention weight in each instance is extracted as having an active influence if the prediction is correct, or a misleading influence if it is incorrect, and that word is then masked for subsequent iterations. Finally, we augment the conventional training objective with a regularization term that encourages ASC models to continue focusing equally on the extracted active context words while decreasing the weights of the misleading ones. Experimental results on multiple datasets show that our approach yields better attention mechanisms, leading to substantial improvements over two state-of-the-art neural ASC models. Source code and trained models are available.
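To make the mining procedure concrete, below is a minimal PyTorch-style sketch of the iterative extraction-and-masking loop and one plausible form of the attention regularizer described in the abstract. All names here (`model`, the `input_ids`/`aspect`/`label` instance fields, `mask_id`, and the exact regularizer form) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of progressive attention-supervision mining.
# `model` is assumed to return (logits, attention_weights) for a
# (masked_input, aspect) pair; these names are not from the paper's code.
import torch

def mine_attention_supervision(model, instances, num_iters=5, mask_id=0):
    """Per instance, repeatedly extract the max-attention context word:
    it is 'active' if the current prediction is correct, 'misleading'
    if incorrect; either way it is masked for later iterations."""
    active = [set() for _ in instances]
    misleading = [set() for _ in instances]
    masked = [inst["input_ids"].clone() for inst in instances]
    with torch.no_grad():
        for _ in range(num_iters):
            for i, inst in enumerate(instances):
                logits, attn = model(masked[i], inst["aspect"])
                j = int(attn.argmax())            # most-attended context word
                if int(logits.argmax()) == inst["label"]:
                    active[i].add(j)              # supports a correct prediction
                else:
                    misleading[i].add(j)          # drives an incorrect prediction
                masked[i][j] = mask_id            # hide it from later iterations
    return active, misleading

def attention_regularizer(attn, active_idx, misleading_idx):
    """One plausible regularization term: pull attention on active words
    toward their common mean (equal focus) and push attention on
    misleading words toward zero; the paper's exact term may differ."""
    reg = attn.new_zeros(())
    if active_idx:
        a = attn[sorted(active_idx)]
        reg = reg + ((a - a.mean().detach()) ** 2).sum()
    if misleading_idx:
        reg = reg + (attn[sorted(misleading_idx)] ** 2).sum()
    return reg
```

In the final training pass, such a term would simply be added to the usual cross-entropy objective, e.g. `loss = ce + lam * attention_regularizer(attn, active[i], misleading[i])`, with `lam` a tuned weight.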
