IEEE International Conference on Multimedia and Expo

Select-additive learning: Improving generalization in multimodal sentiment analysis



Abstract

Multimodal sentiment analysis is drawing increasing attention, as it enables mining opinions from the video reviews now widely available on online platforms. However, only a few high-quality annotated datasets exist for training machine learning algorithms. These limited resources restrict the generalizability of models: the idiosyncratic characteristics of a few speakers (e.g., wearing glasses) can become confounding factors for the sentiment classification task. In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of neural networks trained for multimodal sentiment analysis. In our experiments, we show that SAL significantly improves prediction accuracy in all three modalities (verbal, acoustic, visual), as well as in their fusion. Our results show that SAL, even when trained on one dataset, generalizes well across two new test datasets.
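The name "Select-Additive" suggests a two-phase idea: first *select* the confound-related component of a learned representation (e.g., what is predictable from speaker identity), then *additively* perturb that component with noise so the sentiment classifier cannot rely on it. The sketch below is a loose, hypothetical illustration of that idea only, not the authors' implementation: it stands in for the selection phase with per-speaker feature means (the paper uses a trained auxiliary network on neural representations), and for the addition phase with Gaussian noise replacing the selected component. All function names and parameters here are invented for illustration.

```python
import random

def selection_phase(features, speaker_ids):
    """Toy 'selection': estimate the speaker-predictable component of each
    feature vector as the per-speaker mean. (A linear stand-in for learning
    to reconstruct the confound from speaker identity.)"""
    sums, counts = {}, {}
    for x, s in zip(features, speaker_ids):
        acc = sums.setdefault(s, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[s] = counts.get(s, 0) + 1
    return {s: [v / counts[s] for v in acc] for s, acc in sums.items()}

def addition_phase(features, speaker_ids, speaker_means, sigma=0.1, seed=0):
    """Toy 'addition': subtract the selected speaker-specific component and
    replace it with Gaussian noise, so a downstream sentiment classifier
    trained on the result cannot exploit speaker identity."""
    rng = random.Random(seed)
    perturbed = []
    for x, s in zip(features, speaker_ids):
        mean = speaker_means[s]
        perturbed.append([v - m + rng.gauss(0.0, sigma)
                          for v, m in zip(x, mean)])
    return perturbed

# Hypothetical usage: two speakers, two-dimensional features.
features = [[1.0, 2.0], [1.2, 1.8], [5.0, 5.0]]
speakers = ["a", "a", "b"]
means = selection_phase(features, speakers)
sal_features = addition_phase(features, speakers, means)
```

After the addition phase, the speaker-specific offset is gone and only noise plus the residual (hopefully sentiment-bearing) variation remains as training input.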
