Language Resources and Evaluation

The JESTKOD database: an affective multimodal database of dyadic interactions

Abstract

In human-to-human communication, gesture and speech co-exist in time with a tight synchrony, and gestures are often used to complement or emphasize speech. In human-computer interaction systems, natural, affective and believable use of gestures would be a valuable key component in adopting and emphasizing human-centered aspects. However, natural and affective multimodal data for studying computational models of gesture and speech are limited. In this study, we introduce the JESTKOD database, which consists of speech and full-body motion capture recordings in a dyadic interaction setting under agreement and disagreement scenarios. Participants in the dyadic interactions are native Turkish speakers, and the recordings of each participant are rated in a dimensional affect space. We present our multimodal data collection and annotation process, as well as preliminary experimental studies on agreement/disagreement classification of dyadic interactions using body gesture and speech data. The JESTKOD database provides a valuable asset for investigating gesture and speech towards designing more natural and affective human-computer interaction systems.