Journal on multimodal user interfaces

Inter-rater reliability for emotion annotation in human-computer interaction: comparison and methodological improvements



Abstract

To enable naturalistic human-computer interaction, the recognition of emotions and intentions has received increasing attention, and several modalities are combined to cover the full range of human communication abilities. For this purpose, naturalistic material is recorded in which the subjects are guided through an interaction with predefined crucial points but are free to react individually. Such material captures realistic user reactions but lacks clear labels, so a good transcription and annotation of the recorded material is essential. To this end, the assignment of human annotators has become widely accepted, and inter-rater agreement is a good measure of the reliability of the labelled material. In this paper we investigate the inter-rater agreement achieved on emotionally annotated interaction corpora using Krippendorff's alpha and present methods to improve the reliability. We show that the reliabilities obtained with the different methods do not differ much, so the choice can be based on other aspects. Furthermore, a multimodal presentation of the items in their natural order increases the reliability.
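The abstract names Krippendorff's alpha as the agreement measure but does not show how it is obtained. The following minimal Python sketch illustrates one common way to compute alpha for nominal emotion labels with missing annotations; it is not the authors' code, and the function name and the toy annotator data are hypothetical.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal labels.

    ratings: list of units (e.g. dialogue turns); each unit is the list of
    labels assigned by the annotators who rated it (missing ratings omitted).
    Assumes at least two distinct labels occur somewhere in the data.
    """
    # Coincidence matrix: every ordered label pair within a unit, weighted
    # by 1 / (m_u - 1), where m_u is the number of labels in that unit.
    coincidences = Counter()
    for unit in ratings:
        m = len(unit)
        if m < 2:
            continue  # a unit rated by fewer than two annotators is ignored
        for a, b in permutations(unit, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)

    # Marginal totals per label and overall total.
    n_c = Counter()
    for (a, _), w in coincidences.items():
        n_c[a] += w
    n = sum(n_c.values())

    # Observed vs. expected disagreement (nominal distance: 0 if equal, else 1).
    d_o = sum(w for (a, b), w in coincidences.items() if a != b) / n
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
    return 1.0 - d_o / d_e

# Hypothetical emotion labels from three annotators over five dialogue turns.
units = [
    ["neutral", "neutral", "neutral"],
    ["anger",   "anger",   "neutral"],
    ["joy",     "joy",     "joy"],
    ["neutral", "anger"],          # one annotator skipped this turn
    ["anger",   "anger",   "anger"],
]
print(round(krippendorff_alpha_nominal(units), 3))  # 0.587 for this toy data
```

In practice an established implementation (for example the `krippendorff` package on PyPI) would typically be used instead of hand-rolled code; the sketch above only makes the coincidence-matrix computation behind the alpha values explicit.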

