International Conference on Pattern Recognition

AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies



Abstract

In this work, we propose several variants of a self-attention based network for emotion prediction from movies, which we call AttendAffectNet. We take both audio and video into account and capture the relations among multiple modalities by applying the self-attention mechanism in a novel manner to the extracted features used for emotion prediction. We compare this to the typical temporal application of the self-attention based model, which in our case captures the relations among temporal representations of the movie while accounting for the sequential dependencies of emotion responses. We demonstrate the effectiveness of our proposed architectures on the extended COGNIMUSE dataset [1], [2] and the MediaEval 2016 Emotional Impact of Movies Task [3], both of which consist of movies with emotion annotations. Our results show that applying the self-attention mechanism across the different audio-visual features, rather than in the time domain, is more effective for emotion prediction. Our approach also outperforms many state-of-the-art models for emotion prediction. The code to reproduce our results, with the models' implementation, is available at: https://github.com/ivyha010/AttendAffectNet.
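To illustrate the core idea described in the abstract — attending across modality feature vectors rather than across time steps — the following is a minimal NumPy sketch of scaled dot-product self-attention over a small set of per-modality features. It is not the authors' implementation (see their GitHub repository for that); the function name, the identity Q/K/V projections, and the three-modality setup are simplifying assumptions made here for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modality_self_attention(features):
    """Self-attention over modality feature vectors (hypothetical sketch).

    features: (n_modalities, d) array, one row per extracted feature vector
    (e.g., audio, visual appearance, visual motion). Learned Q/K/V projection
    matrices are omitted (identity projections) to keep the sketch minimal.
    Returns an (n_modalities, d) array of attended features, where each row
    is a relation-weighted mixture of all modality features.
    """
    d = features.shape[1]
    # Pairwise relations between modalities, scaled as in standard attention.
    scores = features @ features.T / np.sqrt(d)      # (n, n)
    weights = softmax(scores, axis=-1)               # each row sums to 1
    return weights @ features                        # (n, d)

# Toy usage: three modality feature vectors of dimension 8.
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 8))
attended = modality_self_attention(feats)
print(attended.shape)  # (3, 8)
```

Attending over the modality axis, as sketched here, lets each feature representation be re-weighted by its relations to the other modalities; the paper's comparison case would instead stack temporal segments along the first axis.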


