Importance of Self-Attention for Sentiment Analysis

Abstract

Despite their superior performance, deep learning models often lack interpretability. In this paper, we explore the modeling of insightful relations between words in order to understand and enhance predictions. To this end, we propose the Self-Attention Network (SANet), a flexible and interpretable architecture for text classification. Experiments indicate that the gains obtained from self-attention are task-dependent. For instance, experiments on sentiment analysis tasks showed an improvement of around 2% when using self-attention compared to a baseline without attention, while topic classification showed no gain. The interpretability brought forward by our architecture highlighted the importance of neighboring word interactions for extracting sentiment.
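
The abstract describes SANet as a self-attention architecture for text classification but gives no implementation details here. As a minimal sketch only, the PyTorch snippet below shows one common reading of such a model: scaled dot-product self-attention over word embeddings, mean pooling, and a linear classifier. The class name, layer sizes, and pooling choice are illustrative assumptions, not the authors' exact design.

# Minimal sketch of self-attention for text classification (illustrative,
# not the paper's exact SANet architecture).
import torch
import torch.nn as nn

class SelfAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Linear maps producing queries, keys, and values from embeddings.
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)
        self.out = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)                 # (batch, seq, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scale = x.size(-1) ** 0.5
        # Attention weights: each word attends to every other word.
        attn = torch.softmax(q @ k.transpose(-2, -1) / scale, dim=-1)
        context = attn @ v                        # (batch, seq, dim)
        # Mean-pool the attended representations, then classify.
        logits = self.out(context.mean(dim=1))
        return logits, attn

# Example usage on random token ids (4 sentences of 20 tokens).
model = SelfAttentionClassifier(vocab_size=10000)
logits, attn = model(torch.randint(0, 10000, (4, 20)))

Returning the attention matrix alongside the logits is what makes such a model inspectable: visualizing a row of attn shows which (often neighboring) words each token attends to, matching the kind of interpretability analysis the abstract reports.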
