Journal: Artificial Intelligence

Enhanced aspect-based sentiment analysis models with progressive self-supervised attention learning


Abstract

In aspect-based sentiment analysis (ABSA), many neural models are equipped with an attention mechanism to quantify the contribution of each context word to sentiment prediction. However, such a mechanism suffers from one drawback: only a few frequent words with sentiment polarities tend to be taken into consideration for the final sentiment decision, while abundant infrequent sentiment words are ignored by the models. To deal with this issue, we propose a progressive self-supervised attention learning approach for attentional ABSA models. In this approach, we iteratively perform sentiment prediction on all training instances and continually learn useful attention supervision information in the meantime. During training, at each iteration, the context words with the highest impact on sentiment prediction, identified from their attention weights or gradients, are extracted as words with an active or misleading influence on the correct or incorrect prediction for each instance. Words extracted in this way are masked in subsequent iterations. To exploit these extracted words for refining ABSA models, we augment the conventional training objective with a regularization term that encourages ABSA models not only to take full advantage of the extracted active context words but also to decrease the weights of the misleading words. We integrate the proposed approach into three state-of-the-art neural ABSA models. Experimental results and in-depth analyses show that our approach yields better attention results and significantly enhances the performance of all three models.
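The abstract describes an iterative extract-and-mask procedure followed by a regularized training objective. The Python sketch below illustrates one way such progressive attention supervision could be organized; the `model.predict` interface, the iteration count `num_iters`, and the exact form of the regularizer are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def extract_attention_supervision(model, instances, num_iters=3):
    """Iteratively collect 'active' and 'misleading' context words.

    `model.predict(words, aspect, masked_positions)` is a hypothetical
    interface assumed to return (predicted_label, attention_weights);
    it stands in for any attentional ABSA model.
    """
    supervision = [{"active": set(), "misleading": set()} for _ in instances]
    masked = [set() for _ in instances]  # word positions hidden in later iterations

    for _ in range(num_iters):
        for i, (words, aspect, gold_label) in enumerate(instances):
            pred, attn = model.predict(words, aspect, masked_positions=masked[i])
            attn = np.asarray(attn, dtype=float)
            attn[list(masked[i])] = -np.inf      # ignore words extracted earlier
            top = int(np.argmax(attn))           # most influential remaining word
            if pred == gold_label:
                supervision[i]["active"].add(top)       # supported a correct prediction
            else:
                supervision[i]["misleading"].add(top)   # drove an incorrect prediction
            masked[i].add(top)                   # mask it in subsequent iterations
    return supervision

def regularized_loss(ce_loss, attn, active, misleading, lam=0.1):
    """Augment the usual cross-entropy loss with an attention regularizer that
    rewards attention on active words and penalizes attention on misleading
    ones (one plausible form; the paper's exact term may differ)."""
    attn = np.asarray(attn, dtype=float)
    reg = 0.0
    if active:
        reg -= np.log(attn[list(active)] + 1e-12).sum()
    if misleading:
        reg += attn[list(misleading)].sum()
    return ce_loss + lam * reg
```

In this sketch each pass masks the single most influential remaining word per instance, so later passes surface the less frequent sentiment words that the abstract says plain attention tends to overlook.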


