
Scoring Workers in Crowdsourcing: How Many Control Questions are Enough?

Annual Conference on Neural Information Processing Systems


Abstract

We study the problem of estimating continuous quantities, such as prices, probabilities, and point spreads, using a crowdsourcing approach. A challenging aspect of combining the crowd's answers is that workers' reliabilities and biases are usually unknown and highly diverse. Control items with known answers can be used to evaluate workers' performance, and hence improve the combined results on the target items with unknown answers. This raises the problem of how many control items to use when the total number of items each worker can answer is limited: using more control items evaluates the workers better, but leaves fewer resources for the target items that are of direct interest, and vice versa. We give theoretical results for this problem under different scenarios, and provide a simple rule of thumb for crowdsourcing practitioners. As a byproduct, we also provide theoretical analysis of the accuracy of different consensus methods.
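To make the control-item idea concrete, below is a minimal sketch of a generic two-stage consensus estimator, assuming a simple additive Gaussian model x_ij = mu_i + b_j + eps_ij with eps_ij ~ N(0, sigma_j^2): each worker's bias and noise variance are first estimated on the control items, and target answers are then debiased and combined with inverse-variance weights. This model, the function names, and the synthetic data are illustrative assumptions, not necessarily the paper's exact estimator.

```python
import numpy as np

def score_workers(control_answers, control_truth):
    """Estimate each worker's bias and noise variance from control items.

    control_answers: (num_workers, num_control) array of answers
    control_truth:   (num_control,) array of known answers
    """
    errors = control_answers - control_truth           # per-item error
    bias = errors.mean(axis=1)                         # systematic offset per worker
    # ddof=1 gives an unbiased variance estimate; the floor avoids infinite weights
    variance = np.maximum(errors.var(axis=1, ddof=1), 1e-6)
    return bias, variance

def combine_targets(target_answers, bias, variance):
    """Debias each worker's answers and combine with inverse-variance weights."""
    debiased = target_answers - bias[:, None]
    weights = 1.0 / variance
    return (weights[:, None] * debiased).sum(axis=0) / weights.sum()

# Synthetic example: 5 workers, 3 control items, 4 target items
rng = np.random.default_rng(0)
truth_c = np.array([10.0, 20.0, 30.0])
truth_t = np.array([5.0, 15.0, 25.0, 35.0])
bias_true = rng.normal(0.0, 2.0, size=5)
sigma_true = rng.uniform(0.5, 3.0, size=5)
answers_c = truth_c + bias_true[:, None] + rng.normal(0.0, sigma_true[:, None], (5, 3))
answers_t = truth_t + bias_true[:, None] + rng.normal(0.0, sigma_true[:, None], (5, 4))

b_hat, v_hat = score_workers(answers_c, truth_c)
print(combine_targets(answers_t, b_hat, v_hat))  # estimates of the target quantities
```

Under this kind of two-stage scheme, the allocation tradeoff in the abstract is visible directly: more control items sharpen the bias and variance estimates (hence the weights), but each control item answered is one fewer target item answered per worker.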