
Multimodal Engagement Classification for Affective Cinema


Abstract

This paper describes a multimodal approach to detecting viewers' engagement from psycho-physiological affective signals. We investigate the individual contributions of the different modalities and report experimental results obtained with several fusion strategies, in both per-clip and per-subject cross-validation settings. A sequence of clips from a short movie was shown to 15 participants, from whom we collected per-clip engagement self-assessments. Cues of the users' affective states were collected by means of (i) galvanic skin response (GSR), (ii) automatic facial tracking, and (iii) electroencephalogram (EEG) signals. The main findings of this study can be summarized as follows: (i) each individual modality significantly encodes the viewers' level of engagement in response to the movie clips, (ii) the GSR and EEG signals provide comparable contributions, and (iii) the best performance is obtained when the three modalities are used together.
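To make the fusion and evaluation settings concrete, below is a minimal sketch, not the authors' implementation, of feature-level fusion with leave-one-subject-out cross-validation (the per-subject setting). It assumes pre-extracted per-clip feature matrices for each modality; the feature dimensions, the synthetic data, and the SVM classifier are all illustrative assumptions.

```python
# Minimal sketch: early (feature-level) fusion of GSR, facial-tracking,
# and EEG features, evaluated with leave-one-subject-out cross-validation.
# All data below is synthetic; shapes and the classifier are assumptions.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_clips = 15, 10                 # 15 participants, a sequence of clips each
n = n_subjects * n_clips

# Hypothetical pre-extracted per-clip features for each modality.
X_gsr  = rng.normal(size=(n, 4))             # e.g., GSR statistics
X_face = rng.normal(size=(n, 12))            # e.g., facial-tracking descriptors
X_eeg  = rng.normal(size=(n, 32))            # e.g., EEG band-power features

y      = rng.integers(0, 2, size=n)          # binarized engagement self-assessments
groups = np.repeat(np.arange(n_subjects), n_clips)  # subject id for each clip

# Feature-level fusion: concatenate the three modalities per clip.
X_fused = np.hstack([X_gsr, X_face, X_eeg])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Per-subject setting: each fold holds out all clips of one participant,
# so the classifier is never tested on a subject it was trained on.
scores = cross_val_score(clf, X_fused, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy: {scores.mean():.3f}")
```

The per-clip setting would instead use a standard k-fold split over clips; other fusion strategies mentioned in the abstract (e.g., decision-level fusion) would train one classifier per modality and combine their outputs rather than concatenating features.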
