IEEE International Conference on Acoustics, Speech and Signal Processing

Agreement and disagreement classification of dyadic interactions using vocal and gestural cues

Abstract

In human-to-human communication, gesture and speech co-exist in time with tight synchrony, and we tend to use gestures to complement or emphasize speech. In this study, we investigate the roles of vocal and gestural cues in identifying a dyadic interaction as agreement or disagreement. We use the JESTKOD database, which consists of speech and full-body motion-capture recordings of dyadic interactions under agreement and disagreement scenarios. Spectral features from the vocal channel and upper-body joint angles from the gestural channel are used to evaluate unimodal and multimodal classification performance. Both modalities attain classification rates significantly above chance level, and the multimodal classifier achieves a classification rate of more than 80% on 15-second utterances using statistical features of speech and motion.
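To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch of feature-level fusion: per-utterance statistical functionals (mean, standard deviation, minimum, maximum) are computed over frame-level spectral features and upper-body joint angles, concatenated, and passed to a classifier. The frame counts, feature dimensions (13 MFCC coefficients, 20 joint angles), the choice of functionals, and the SVM classifier are illustrative assumptions, not the authors' exact configuration.

    # A minimal sketch (not the authors' implementation) of feature-level fusion
    # for agreement/disagreement classification. It assumes frame-level spectral
    # features (e.g. MFCCs) and upper-body joint angles per utterance are already
    # extracted; shapes and the SVM choice are illustrative assumptions.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def statistical_functionals(frames):
        """Summarize a (num_frames, num_dims) matrix with per-dimension statistics."""
        return np.concatenate([frames.mean(axis=0),
                               frames.std(axis=0),
                               frames.min(axis=0),
                               frames.max(axis=0)])

    def fuse_utterance(mfcc_frames, joint_angle_frames):
        """Feature-level fusion: concatenate vocal and gestural functionals."""
        vocal = statistical_functionals(mfcc_frames)             # spectral (vocal) channel
        gestural = statistical_functionals(joint_angle_frames)   # joint angles (gestural) channel
        return np.concatenate([vocal, gestural])

    # Toy data standing in for 15-second utterances from dyadic interactions.
    rng = np.random.default_rng(0)
    num_utterances = 40
    X = np.stack([fuse_utterance(rng.normal(size=(1500, 13)),    # e.g. 13 MFCCs per frame
                                 rng.normal(size=(1500, 20)))    # e.g. 20 joint angles per frame
                  for _ in range(num_utterances)])
    y = rng.integers(0, 2, size=num_utterances)  # 0 = disagreement, 1 = agreement

    # Multimodal classifier on the fused statistical features.
    scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
    print("cross-validation accuracy:", scores.mean())

A unimodal baseline follows the same pattern with only one of the two functional vectors, which is how the vocal-only and gestural-only performances would be compared against the fused classifier.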
