
Visual-textual sentiment classification with bi-directional multi-level attention networks


Abstract

Social networks have become an inseparable part of our daily lives, and thus automatic sentiment analysis of social media content is of great significance for identifying people's viewpoints, attitudes, and emotions on social websites. Most existing works have concentrated on sentiment analysis of a single modality, such as image or text, and cannot handle social media content that carries multiple modalities, including both image and text. Although some works have attempted multi-modal sentiment analysis, the complicated correlations between the two modalities have not been fully explored. In this paper, we propose a novel Bi-Directional Multi-Level Attention (BDMLA) model that exploits the complementary and comprehensive information between the image modality and the text modality for joint visual-textual sentiment classification. Specifically, to highlight the emotional regions and words in an image-text pair, a visual attention network and a semantic attention network are proposed, respectively. The visual attention network makes the region features of the image interact with multiple semantic levels of the text (word, phrase, and sentence) to obtain the attended visual features. The semantic attention network makes the semantic features of the text interact with multiple visual levels of the image (global and local) to obtain the attended semantic features. Then, the attended visual and semantic features from the two attention networks are unified into a holistic framework to conduct visual-textual sentiment classification. Proof-of-concept experiments conducted on three real-world datasets verify the effectiveness of our model. (C) 2019 Elsevier B.V. All rights reserved.
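The core cross-modal attention step the abstract describes — letting features from one modality weight and aggregate features from the other — can be sketched in a minimal form. This is an illustrative toy, not the paper's actual BDMLA architecture; the function name, scaled dot-product scoring, and dimensions are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query, keys, values):
    """Attend a query from one modality over features of the other.

    query:  (d,)   e.g. a sentence-level text embedding
    keys:   (n, d) e.g. image region features
    values: (n, d) features to aggregate (often identical to keys)
    Returns the attended feature vector of shape (d,).
    """
    scores = keys @ query / np.sqrt(query.shape[0])  # (n,) similarity scores
    weights = softmax(scores)                        # attention weights over regions
    return weights @ values                          # weighted sum, shape (d,)

# toy example: 4 image regions with 8-dim features, one text query
rng = np.random.default_rng(0)
regions = rng.standard_normal((4, 8))
text_q = rng.standard_normal(8)
attended_visual = cross_modal_attention(text_q, regions, regions)
```

In the paper's bi-directional setting, this attention would be applied in both directions (text levels attending over image regions, and image levels attending over text units) before the attended features are fused for classification.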

Bibliographic details

  • Source
    Knowledge-Based Systems | 2019, Issue 15 | pp. 61-73 | 13 pages
  • Author affiliations

    Beihang Univ, Sch Comp Sci & Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China;

    Jinan Univ, Coll Cyber Secur, Guangzhou 510632, Guangdong, Peoples R China | Jinan Univ, Coll Informat Sci & Technol, Guangzhou 510632, Guangdong, Peoples R China | Guangdong Key Lab Data Secur & Privacy Preserving, Guangzhou 510632, Guangdong, Peoples R China;

    Beihang Univ, Sch Cyber Sci & Technol, Beijing 100191, Peoples R China;

    Nanjing Univ Aeronaut & Astronaut, Sch Comp Sci & Technol, Nanjing 210016, Jiangsu, Peoples R China;

    Beihang Univ, Sch Comp Sci & Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China;

    Beihang Univ, Sch Comp Sci & Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China;

    Coordinat Ctr China, Natl Comp Network Emergency Response Tech Team, Beijing 100029, Peoples R China;

  • Indexing information
  • Original format: PDF
  • Language: eng
  • CLC classification
  • Keywords

    Multi-modal; Social image; Attention model; Sentiment analysis;

