Linguistic Annotation Workshop

Focus Annotation of Task-based Data: Establishing the Quality of Crowd Annotation



Abstract

We explore the annotation of information structure in German and compare the quality of expert annotation with that of crowd-sourced annotation, taking into account the cost of reaching crowd consensus. Concretely, we discuss a crowd-sourcing effort annotating focus in a task-based corpus of German containing reading comprehension questions and answers. Against the backdrop of a gold-standard reference resulting from adjudicated expert annotation, we evaluate a crowd-sourcing experiment using majority voting to establish baseline performance. To refine the crowd-sourcing setup, we introduce Consensus Cost as a measure of agreement within the crowd. We investigate the usefulness of Consensus Cost as a measure of crowd annotation quality both intrinsically, in relation to the expert gold standard, and extrinsically, by integrating focus annotation information into a system performing Short Answer Assessment while taking Consensus Cost into account. We find that low Consensus Cost in crowd sourcing indicates high quality, though high Consensus Cost does not necessarily indicate low accuracy but rather increased variability. Overall, taking Consensus Cost into account improves both intrinsic and extrinsic evaluation measures.
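The abstract introduces Consensus Cost only informally, as a measure of agreement within the crowd; the paper's precise definition is not given here. As a minimal sketch, the Python snippet below shows majority-vote aggregation (the baseline named in the abstract) together with one plausible reading of Consensus Cost as the share of annotators dissenting from the majority label. The function names, the focus/background label set, and this particular formula are illustrative assumptions, not the paper's definitions.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate one item's crowd labels by majority vote
    (the baseline aggregation described in the abstract)."""
    label, _count = Counter(labels).most_common(1)[0]
    return label

def consensus_cost(labels):
    """Hypothetical Consensus Cost: the fraction of annotators who
    disagree with the majority label (0 = full agreement, larger
    values = more disagreement). The paper's exact definition is
    not stated in the abstract; this is an assumed proxy."""
    _label, majority_count = Counter(labels).most_common(1)[0]
    return (len(labels) - majority_count) / len(labels)

# Example: five crowd workers label one token as focus or background.
votes = ["focus", "focus", "background", "focus", "focus"]
print(majority_vote(votes))   # -> "focus"
print(consensus_cost(votes))  # -> 0.2 (one of five dissents)
```

Under this reading, items with low Consensus Cost would be trusted directly, while high-cost items signal variability in the crowd rather than outright error, matching the abstract's finding.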
