9th International Conference on Language Resources and Evaluation (LREC)

Can the Crowd be Controlled?: A Case Study on Crowd Sourcing and Automatic Validation of Completed Tasks based on User Modeling



Abstract

Annotation is an essential step in the development cycle of many Natural Language Processing (NLP) systems. Lately, crowdsourcing has been employed to facilitate large-scale annotation at a reduced cost. Unfortunately, verifying the quality of the submitted annotations is a daunting task. Existing approaches address this problem through either sampling or redundancy, but both carry a cost of their own. Based on the observation that a crowdsourcing worker returns to do tasks he has done previously, this paper proposes a novel framework for the automatic validation of crowd-sourced tasks. A case study based on sentiment analysis is presented to elucidate the framework and its feasibility. The results suggest that validation of crowd-sourced tasks can be automated to a certain extent.
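
The abstract does not detail the framework's internals, but the contrast it draws can be sketched in a few lines of Python: a redundancy baseline (majority voting over repeated annotations of the same item) versus a user-modeling heuristic that auto-accepts submissions from workers with a strong track record on previously validated items. This is a minimal illustration under stated assumptions, not the authors' method; the names (history, auto_validate) and the smoothing and threshold values are hypothetical.

    from collections import Counter, defaultdict

    # Per-worker track record on items whose gold label is known:
    # worker id -> list of booleans (True = the worker's label was correct).
    history = defaultdict(list)

    def majority_label(annotations):
        """Redundancy baseline: keep the label most redundant workers agree on."""
        label, _ = Counter(annotations).most_common(1)[0]
        return label

    def worker_accuracy(worker_id, prior=0.5, prior_weight=2.0):
        """Smoothed historical accuracy of a worker (a simple user model)."""
        outcomes = history[worker_id]
        return (sum(outcomes) + prior * prior_weight) / (len(outcomes) + prior_weight)

    def auto_validate(worker_id, threshold=0.8):
        """Auto-accept a submission when the worker's track record clears the
        threshold; otherwise fall back to manual or redundant checking."""
        return worker_accuracy(worker_id) >= threshold

    # Example: three redundant sentiment labels for one item.
    assert majority_label(["positive", "positive", "negative"]) == "positive"

    # A returning worker who got 8 of 9 earlier gold-checked items right.
    history["w42"] = [True] * 8 + [False]
    assert auto_validate("w42")  # smoothed accuracy ~0.82 >= 0.8

The cost trade-off the abstract points to is visible here: the majority-vote baseline pays for every item with redundant labels, while the user model amortizes validation effort across a returning worker's history.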

