Neurocomputing

Fusing audio, visual and textual clues for sentiment analysis from multimodal content



Abstract

A huge number of videos are posted every day on social media platforms such as Facebook and YouTube, making the Internet a practically unlimited source of information. In the coming decades, coping with such information and mining useful knowledge from it will be an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis that harvests sentiments from Web videos using a model that draws on the audio, visual and textual modalities as sources of information. We use both feature-level and decision-level fusion methods to merge the affective information extracted from the multiple modalities. A thorough comparison with existing work in this area is carried out throughout the paper, demonstrating the novelty of our approach. Preliminary comparative experiments on the YouTube dataset show that the proposed multimodal system achieves an accuracy of nearly 80%, outperforming all state-of-the-art systems by more than 20%. (C) 2015 Elsevier B.V. All rights reserved.
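The two fusion strategies named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the feature dimensions, scores, and function names below are purely illustrative assumptions. Feature-level fusion concatenates per-modality feature vectors before classification, while decision-level fusion combines the separate per-modality sentiment scores afterwards:

```python
import numpy as np

def feature_level_fusion(audio, visual, textual):
    """Concatenate per-modality feature vectors into one joint vector
    that a single classifier would then consume."""
    return np.concatenate([audio, visual, textual])

def decision_level_fusion(scores, weights=None):
    """Combine per-modality sentiment scores (e.g. classifier outputs
    in [-1, 1]) by a weighted average; equal weights by default."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones_like(scores) / len(scores)
    return float(np.dot(scores, weights))

# Toy feature vectors for one video segment (dimensions are made up).
audio_feat = np.array([0.2, 0.7])
visual_feat = np.array([0.5, 0.1, 0.9])
text_feat = np.array([0.3])

joint = feature_level_fusion(audio_feat, visual_feat, text_feat)
print(joint.shape)  # one 6-dimensional joint feature vector

# Hypothetical per-modality sentiment scores; the fused decision
# is their (equally weighted) mean, roughly 0.6 here.
fused = decision_level_fusion([0.8, 0.6, 0.4])
print(fused)
```

The trade-off is standard: feature-level fusion lets a classifier exploit cross-modal correlations but requires aligned features, whereas decision-level fusion keeps each modality's classifier independent and only merges their outputs.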
