International Workshop on Human Behavior Understanding

Using the Audio Respiration Signal for Multimodal Discrimination of Expressive Movement Qualities


Abstract

In this paper we propose a multimodal approach to distinguish between movements displaying three different expressive qualities: fluid, fragmented, and impulsive movements. Our approach is based on the Event Synchronization algorithm, which is applied to compute the amount of synchronization between two low-level features extracted from multimodal data. More specifically, we use the energy of the audio respiration signal captured by a standard microphone placed near the mouth, and the whole-body kinetic energy estimated from motion capture data. The method was evaluated on 90 movement segments performed by 5 dancers. Results show that fragmented movements display higher average synchronization than fluid and impulsive movements.
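The abstract only names the Event Synchronization algorithm; as a rough illustration, the sketch below computes an Event Synchronization score between peak events detected in two feature streams. The peak-based event definition, the feature rate fs, the coincidence window tau, and the placeholder signals resp_energy and kinetic_energy are assumptions made for this example, not the authors' actual pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

def event_synchronization(tx, ty, tau):
    """Event Synchronization (in the spirit of Quian Quiroga et al., 2002)
    between two arrays of event times tx, ty (here: peak indices of the two
    energy signals). tau is the coincidence window, in the same units."""
    if len(tx) == 0 or len(ty) == 0:
        return 0.0
    d = tx[:, None].astype(float) - ty[None, :].astype(float)  # pairwise lags
    simultaneous = 0.5 * np.sum(d == 0)
    c_xy = np.sum((d > 0) & (d <= tau)) + simultaneous   # x events shortly after y events
    c_yx = np.sum((d < 0) & (d >= -tau)) + simultaneous  # y events shortly after x events
    return (c_xy + c_yx) / np.sqrt(len(tx) * len(ty))

# Hypothetical usage: resp_energy and kinetic_energy stand in for the audio
# respiration energy and the whole-body kinetic energy, resampled to a common
# feature rate fs.
fs = 50  # Hz, assumed common feature rate
rng = np.random.default_rng(0)
resp_energy = rng.random(600)     # placeholder for audio respiration energy
kinetic_energy = rng.random(600)  # placeholder for whole-body kinetic energy

ev_resp, _ = find_peaks(resp_energy, distance=fs // 2)
ev_kin, _ = find_peaks(kinetic_energy, distance=fs // 2)
q = event_synchronization(ev_resp, ev_kin, tau=int(0.25 * fs))
print(f"Event Synchronization Q = {q:.2f}")
```

A higher score means that energy peaks in the respiration channel and in the body movement tend to co-occur within the chosen window, which is the kind of cross-modal coupling the abstract reports as being strongest for fragmented movements.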
