Source: Computing Reviews

An analysis of human factors and label accuracy in crowdsourcing relevance judgments


Abstract

Labeling is a complex task. This interesting paper investigates the use of crowdsourcing to create labeled data, a process that involves relevance judgments known to be subjective. The authors conduct a series of experiments on Amazon's Mechanical Turk (AMT) to explore the human characteristics of the crowd performing a relevance assessment task. The study considers three factors, or variables (the payment offered to the assessor, the effort expended by the assessor, and the assessor's qualifications) and examines the effect of these variables on the resulting relevance labels.
