International Conference on Machine Learning and Data Engineering

Attention Visualization of Gated Convolutional Neural Networks with Self Attention in Sentiment Analysis

Abstract

Deep learning is applied to many research areas, such as natural language processing, image processing, and acoustic recognition. Deep neural networks have very complex and deep structures, and it is difficult to explain why they do or do not work well, so improving their performance requires trial and error. We develop a mechanism that shows how a neural network arrives at its final prediction and helps in designing new network architectures based on its prediction criteria. Concretely, we visualize the features that are important for predicting the final result using an attention mechanism. In this paper, we address sentiment analysis, one of the natural language processing tasks. In image processing, visualizing the weights of a neural network is a common approach and yields intuitive results such as object outlines and object components. In natural language processing, however, this approach is not interpretable, because the discriminant function constructed by a neural network is complex and nonlinear, and it is very difficult to relate its weights to the words in a text. We employ a Gated Convolutional Neural Network (GCNN) and introduce a self-attention mechanism to understand how the GCNN determines sentiment polarities from raw reviews. A GCNN can simulate an n-gram model, and the self-attention mechanism makes the correspondence between the network's weights and individual words clear. In experiments, we used Amazon reviews and evaluated the performance of the proposed method. In particular, the proposed method was able to emphasize the words in a review that determine its sentiment polarity. Moreover, when a prediction was wrong, we could understand why the proposed method made the mistake, because we could see which words it emphasized.
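As a rough illustration only: the following is a minimal PyTorch-style sketch of a gated convolutional block combined with an attention layer over token positions, returning per-word attention weights that can be visualized. The layer sizes, the additive attention scoring, and all identifiers are assumptions made for this sketch, not the authors' actual model or hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvAttentionClassifier(nn.Module):
    # Sketch: gated 1-D convolutions capture n-gram-like features; a simple
    # attention layer then scores each token position so the weights can be
    # inspected per word when the model predicts sentiment polarity.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 kernel_size=3, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Two parallel convolutions: one for content, one for the gate (GLU-style gating).
        self.conv = nn.Conv1d(embed_dim, hidden_dim, kernel_size, padding=kernel_size // 2)
        self.gate = nn.Conv1d(embed_dim, hidden_dim, kernel_size, padding=kernel_size // 2)
        self.attn_score = nn.Linear(hidden_dim, 1)          # one attention score per token
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):                            # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)            # (batch, embed_dim, seq_len)
        h = self.conv(x) * torch.sigmoid(self.gate(x))       # gated convolution
        h = h.transpose(1, 2)                                # (batch, seq_len, hidden_dim)
        scores = self.attn_score(torch.tanh(h)).squeeze(-1)  # (batch, seq_len)
        attn = F.softmax(scores, dim=-1)                     # per-word weights to visualize
        pooled = torch.bmm(attn.unsqueeze(1), h).squeeze(1)  # attention-weighted sum over tokens
        logits = self.classifier(pooled)
        return logits, attn

A returned attn row can be rendered as a heat map over the words of a review to show which words the classifier emphasized, which is the kind of visualization the paper describes.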