IEEE/CVF Conference on Computer Vision and Pattern Recognition

Emotional Attention: A Study of Image Sentiment and Visual Attention



Abstract

Image sentiment influences visual perception. Emotion-eliciting stimuli such as happy faces and poisonous snakes are generally prioritized in human attention. However, little research has evaluated the interrelationships of image sentiment and visual saliency. In this paper, we present the first study to focus on the relation between the emotional properties of an image and visual attention. We first create the EMOtional attention dataset (EMOd). It is a diverse set of emotion-eliciting images, and each image has (1) eye-tracking data collected from 16 subjects, and (2) intensive image context labels including object contour, object sentiment, object semantic category, and high-level perceptual attributes such as image aesthetics and elicited emotions. We perform extensive analyses on EMOd to identify how image sentiment relates to human attention. We discover an emotion prioritization effect: for our images, emotion-eliciting content attracts human attention strongly, but this advantage diminishes dramatically after the initial fixation. Aiming to model human emotion prioritization computationally, we design a deep neural network for saliency prediction, which includes a novel subnetwork that learns the spatial and semantic context of the image scene. The proposed network outperforms the state-of-the-art on three benchmark datasets by effectively capturing the relative importance of human attention within an image. The code, models, and dataset are available online at https://nus-sesame.top/emotionalattention/.
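The abstract describes per-image annotations (object contours, object sentiment, semantic categories, and high-level perceptual attributes) together with eye-tracking fixations from 16 subjects, and reports evaluation against saliency benchmarks. As a rough illustration only, the indented Python sketch below shows one plausible way to represent such a record and to score a predicted saliency map with the standard Normalized Scanpath Saliency (NSS) metric. The field names, record layout, and toy data are assumptions made for illustration; they are not taken from the paper or its released code at https://nus-sesame.top/emotionalattention/.

    # Sketch of an EMOd-style annotation record and the standard NSS saliency
    # metric. Field names and layout are illustrative assumptions only.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    import numpy as np


    @dataclass
    class ObjectAnnotation:
        contour: List[Tuple[int, int]]   # polygon vertices outlining the object
        sentiment: float                 # e.g. -1 (negative) .. +1 (positive)
        semantic_category: str           # e.g. "face", "snake"


    @dataclass
    class EmodImageRecord:
        image_path: str
        fixations: List[Tuple[int, int]]              # pooled fixations (x, y) from 16 subjects
        objects: List[ObjectAnnotation] = field(default_factory=list)
        aesthetics_score: float = 0.0                 # high-level perceptual attribute
        elicited_emotion: str = "neutral"             # image-level elicited-emotion label


    def normalized_scanpath_saliency(saliency_map: np.ndarray,
                                     fixations: List[Tuple[int, int]]) -> float:
        """NSS: mean of the z-scored saliency map sampled at human fixation points."""
        z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
        return float(np.mean([z[y, x] for x, y in fixations]))


    if __name__ == "__main__":
        # Toy example: a predicted saliency map that peaks at two fixated locations.
        sal = np.zeros((32, 32))
        sal[10, 10] = sal[20, 5] = 1.0
        record = EmodImageRecord(image_path="img_0001.jpg",
                                 fixations=[(10, 10), (5, 20)])
        print("NSS:", normalized_scanpath_saliency(sal, record.fixations))

NSS is chosen here only because it is a common, easily reproduced fixation-based saliency score; the paper's own benchmark comparison may use additional or different metrics.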
