International Conference on Web Information Systems Engineering
An Evaluation of Aggregation Techniques in Crowdsourcing

Abstract

As the volume of AI problems involving human knowledge continues to soar, crowdsourcing has become essential in a wide range of world-wide-web applications. One of the biggest challenges of crowdsourcing is aggregating the answers collected from the crowd, since workers may have widely varying levels of expertise. To tackle this challenge, many aggregation techniques have been proposed. These techniques, however, have never been compared and analyzed under the same setting, making the 'right' choice for a particular application very difficult. To address this problem, this paper presents a benchmark that offers a comprehensive empirical study comparing the performance of these aggregation techniques. Specifically, we integrated several state-of-the-art methods in a comparable manner and measured various performance metrics with our benchmark, including computation time, accuracy, robustness to spammers, and adaptivity to multi-labeling. We then provide an in-depth analysis of the benchmarking results, obtained by simulating the crowdsourcing process with different types of workers. We believe the findings from the benchmark can serve as a practical guideline for crowdsourcing applications.
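The simplest baseline for aggregating crowd answers of the kind the abstract describes is majority voting: each task takes the label chosen by the most workers. A minimal sketch follows; the task IDs, labels, and `majority_vote` helper are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate crowd answers per task by majority vote.

    answers: dict mapping task_id -> list of worker labels.
    Returns a dict mapping task_id -> the most frequent label.
    """
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in answers.items()}

# Hypothetical worker responses for three labeling tasks
answers = {
    "t1": ["cat", "cat", "dog"],
    "t2": ["dog", "dog", "dog"],
    "t3": ["cat", "dog", "cat"],
}
print(majority_vote(answers))  # {'t1': 'cat', 't2': 'dog', 't3': 'cat'}
```

Majority voting ignores worker expertise, which is exactly the weakness that motivates the more sophisticated techniques (e.g., expertise-weighted or EM-based aggregation) that benchmarks in this area compare against it.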

