ACM International Conference on Multimedia

Automated localization of affective objects and actions in images via caption text-cum-eye gaze analysis

Abstract

We propose a novel framework to localize and label affective objects and actions in images through a combination of text-, visual-, and gaze-based analysis. Human gaze provides useful cues for inferring the locations and interactions of affective objects. While the concepts (labels) associated with an image can be determined from its caption, we demonstrate localization of these concepts by learning a statistical affect model for world concepts. The affect model is derived from non-invasively acquired fixation patterns on labeled images, and it guides localization of affective objects (faces, reptiles) and actions (look, read) from fixations in unlabeled images. Experimental results on a database of 500 images confirm the effectiveness and promise of the proposed approach.
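
The abstract describes a two-stage idea: learn how gaze fixations distribute over labeled concepts, then use that learned model to attach fixations in an unlabeled image to the concepts extracted from its caption. The paper gives no implementation details here, so the following is only a minimal illustrative sketch, not the authors' method: it assumes each concept's fixations follow a single 2D Gaussian over normalized image coordinates (a deliberate simplification of the paper's statistical affect model), and all function names and toy data are invented for illustration.

```python
# Minimal sketch of gaze-based concept localization (NOT the paper's actual
# model). Assumption: each concept's fixations are modeled by one 2D Gaussian
# over normalized (x, y) image coordinates, fit from labeled training images.
import numpy as np

def fit_concept_model(fixations_by_concept):
    """Fit a (mean, inverse covariance) pair per concept from normalized
    fixation coordinates collected on labeled training images."""
    model = {}
    for concept, pts in fixations_by_concept.items():
        pts = np.asarray(pts, dtype=float)
        mean = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(2)  # regularize near-singular fits
        model[concept] = (mean, np.linalg.inv(cov))
    return model

def localize(model, caption_concepts, fixations):
    """Assign each fixation in an unlabeled image to the most likely concept
    from its caption, then return a bounding box per concept."""
    assigned = {c: [] for c in caption_concepts}
    for p in np.asarray(fixations, dtype=float):
        # Pick the concept with the smallest Mahalanobis distance.
        best = min(caption_concepts,
                   key=lambda c: (p - model[c][0]) @ model[c][1] @ (p - model[c][0]))
        assigned[best].append(p)
    boxes = {}
    for c, pts in assigned.items():
        if pts:
            pts = np.asarray(pts)
            boxes[c] = (*pts.min(axis=0), *pts.max(axis=0))  # (x0, y0, x1, y1)
    return boxes

# Toy usage: train on invented fixation data, then localize caption concepts.
train = {"face": [(0.30, 0.20), (0.35, 0.25), (0.28, 0.22)],
         "read": [(0.60, 0.70), (0.65, 0.75), (0.62, 0.68)]}
m = fit_concept_model(train)
print(localize(m, ["face", "read"], [(0.31, 0.21), (0.63, 0.72), (0.34, 0.24)]))
```

In the paper itself the affect model is learned from real eye-tracking data and presumably captures richer fixation statistics than position alone, so this sketch conveys only the overall assign-and-localize flow.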
