International Conference on Human-Computer Interaction; Symposium on Human Interface; HCI International 2011

Multimodal Conversation Scene Analysis for Understanding People's Communicative Behaviors in Face-to-Face Meetings



Abstract

This presentation overviews our recent progress in multimodal conversation scene analysis and discusses its future role in designing better human-to-human communication systems. Conversation scene analysis aims to automatically describe conversation scenes from the multimodal nonverbal behaviors of participants, as captured by cameras and microphones. To date, the author's group has proposed a research framework based on probabilistic modeling of conversation phenomena for solving several basic problems: speaker diarization ("who is speaking when"), addressee identification ("who is talking to whom"), interaction structure ("who is responding to whom"), estimation of the visual focus of attention (VFOA, "who is looking at whom"), and inference of interpersonal emotion ("who has empathy or antipathy toward whom"), all from observed multimodal behaviors including utterances, head pose, head gestures, eye gaze, and facial expressions. This paper overviews our approach and discusses how conversation scene analysis can be extended to enhance the design process of computer-mediated communication systems.
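To make the first of these tasks concrete, the following is a minimal sketch of "who is speaking when" (speaker diarization) in a meeting setting. This is a deliberately simplified baseline, not the authors' probabilistic model: it assumes each participant has a dedicated microphone and that per-frame audio energy has already been computed, then labels each frame with the most energetic participant (or silence below a threshold).

```python
# Hedged sketch: a threshold-based diarization baseline for a multi-microphone
# meeting. The frame data and threshold value are illustrative assumptions,
# not part of the paper's framework.

def diarize(energies, threshold=0.5):
    """energies: list of frames; each frame is a list of per-participant
    microphone energies. Returns one label per frame: the index of the
    loudest participant, or None if no one exceeds the silence threshold."""
    labels = []
    for frame in energies:
        # Pick the participant with the highest energy in this frame.
        best = max(range(len(frame)), key=lambda i: frame[i])
        labels.append(best if frame[best] >= threshold else None)
    return labels

frames = [
    [0.9, 0.1, 0.2],  # participant 0 talking
    [0.8, 0.2, 0.1],  # participant 0 talking
    [0.1, 0.1, 0.0],  # silence
    [0.2, 0.7, 0.1],  # participant 1 talking
]
print(diarize(frames))  # [0, 0, None, 1]
```

The paper's framework replaces this hard threshold with probabilistic inference, which also lets the same machinery combine audio with head pose, gaze, and other nonverbal cues for the remaining tasks (addressee identification, VFOA estimation, and so on).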
