Published in: ACM International Conference on Multimodal Interaction

Using Self-Context for Multimodal Detection of Head Nods in Face-to-Face Interactions



Abstract

Head nods occur in virtually every face-to-face discussion. As part of the backchannel domain, they are not only used to express a 'yes', but also to display interest or enhance communicative attention. Detecting head nods in natural interactions is a challenging task, as head nods can be subtle in both amplitude and duration. In this study, we make use of findings in psychology establishing that the dynamics of head gestures are conditioned on the person's speaking status. We develop a multimodal method using audio-based self-context to detect head nods in natural settings. We demonstrate that our multimodal approach, which uses the speaking status of the person under analysis, significantly improves the detection rate over a visual-only approach.
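The core idea of audio-based self-context can be illustrated with a minimal sketch. This is not the paper's actual model; the function, thresholds, and fusion rule below are hypothetical, showing only how a visual nod detector's decision might be conditioned on the same person's speaking status:

```python
# Illustrative sketch (hypothetical, not the authors' implementation):
# condition a visual head-nod detector on the subject's audio-derived
# speaking status ("self-context") by switching decision thresholds.

def detect_nod(visual_score, is_speaking,
               threshold_speaking=0.6, threshold_listening=0.4):
    """Return True if a head nod is detected.

    visual_score: confidence from a vision-based nod detector (0..1).
    is_speaking:  speaking status of the same person, estimated from audio.

    The thresholds are made-up values: since listeners' backchannel nods
    tend to be subtler than speakers' nods, a lower threshold is applied
    when the person is silent.
    """
    threshold = threshold_speaking if is_speaking else threshold_listening
    return visual_score >= threshold
```

In this toy formulation, a subtle nod (visual score 0.5) from a silent listener is accepted, while the same score from an active speaker is rejected, reflecting the paper's premise that head-gesture dynamics depend on speaking status.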

