Automatic gaze analysis in multiparty conversations based on Collective First-Person Vision

Abstract

This paper extends the affective computing research field by introducing first-person vision to automatic conversation analysis. We target medium-sized face-to-face group conversations in which each person wears inward-looking and outward-looking cameras. We demonstrate that the fundamental techniques required for group gaze analysis, i.e. speaker detection, face tracking, and gaze estimation, can be performed accurately and effectively via self-training in a unified framework, by gathering the captured audio-visual signals into a centralized system and exploiting a general conversation rule: listeners look mainly at the speaker. We visualize the characteristics of participants' gaze behavior as a gazee-centered heat map, which quantitatively reveals which parts of the gazee's body the gazer looked at, and for how long, while speaking or listening. An experiment involving two groups of six-person conversations demonstrates the potential of the proposed framework.
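
The gazee-centered heat map described in the abstract lends itself to a simple accumulation over per-frame gaze estimates. The sketch below is only an illustration of that idea, not the authors' implementation: the frame-record format, the field names (gaze_xy, gazer_is_speaking), the grid resolution, and the frame rate are all assumptions, and the upstream speaker detection, face tracking, and gaze estimation are taken as given.

```python
import numpy as np

def accumulate_gaze_heatmaps(frames, grid_shape=(64, 32), fps=30.0):
    """Accumulate gazee-centered gaze heat maps for one gazer-gazee pair,
    split by whether the gazer is speaking or listening in each frame.

    `frames` is assumed to be an iterable of per-frame records with:
      - "gaze_xy": estimated gaze point in the gazee's body-centered
        coordinates, normalized to [0, 1] x [0, 1]
      - "gazer_is_speaking": bool from the speaker-detection step
    Returns (heatmap_while_speaking, heatmap_while_listening) in
    seconds per grid cell, assuming a fixed frame rate `fps`.
    """
    heat = {True: np.zeros(grid_shape), False: np.zeros(grid_shape)}
    rows, cols = grid_shape
    for rec in frames:
        x, y = rec["gaze_xy"]
        if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
            continue  # gaze fell outside the gazee's body region
        r = min(int(y * rows), rows - 1)
        c = min(int(x * cols), cols - 1)
        heat[rec["gazer_is_speaking"]][r, c] += 1.0 / fps
    return heat[True], heat[False]
```

In such a setup, the two returned maps could be normalized by the gazer's total speaking and listening time, which would make the "what parts of the gazee's body and for how long" comparison across participants directly readable from the heat maps.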
