ScientificWorldJournal

Creation of Reliable Relevance Judgments in Information Retrieval Systems Evaluation Experimentation through Crowdsourcing: A Review

Abstract

Test collections are used to evaluate information retrieval systems in laboratory-based evaluation experiments. In the classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still challenged to perform reliable, low-cost evaluations of retrieval systems. Crowdsourcing, as a novel method of data acquisition, is broadly used in many research fields. It has been shown that crowdsourcing is an inexpensive and quick solution as well as a reliable alternative for creating relevance judgments. One crowdsourcing application in IR is judging the relevance of query-document pairs. For a crowdsourcing experiment to succeed, the relevance judgment tasks should be designed carefully, with an emphasis on quality control. This paper explores the different factors that influence the accuracy of relevance judgments produced by workers and how to strengthen the reliability of judgments in crowdsourcing experiments.
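As one illustration of the kind of quality control the review discusses, the sketch below aggregates redundant worker labels for each query-document pair by majority vote and flags pairs with low agreement for re-judging. This is a generic technique, not the paper's specific method; the function name, data layout, and agreement threshold are hypothetical, chosen only to make the example self-contained.

```python
from collections import Counter, defaultdict

# Hypothetical input: one tuple per worker judgment,
# (worker_id, query_id, doc_id, label) with label 1 = relevant, 0 = not relevant.
judgments = [
    ("w1", "q1", "d1", 1),
    ("w2", "q1", "d1", 1),
    ("w3", "q1", "d1", 0),
    ("w1", "q1", "d2", 0),
    ("w2", "q1", "d2", 0),
    ("w3", "q1", "d2", 0),
]

def aggregate_by_majority(judgments, agreement_threshold=0.6):
    """Aggregate redundant worker labels per (query, doc) pair by majority vote.

    Pairs whose majority label is supported by fewer than `agreement_threshold`
    of the votes are returned separately so they can be re-assigned to workers.
    """
    votes_per_pair = defaultdict(list)
    for _, query_id, doc_id, label in judgments:
        votes_per_pair[(query_id, doc_id)].append(label)

    qrels, low_agreement = {}, []
    for pair, votes in votes_per_pair.items():
        majority_label, count = Counter(votes).most_common(1)[0]
        qrels[pair] = majority_label
        if count / len(votes) < agreement_threshold:
            low_agreement.append(pair)
    return qrels, low_agreement

qrels, to_rejudge = aggregate_by_majority(judgments)
print(qrels)       # {('q1', 'd1'): 1, ('q1', 'd2'): 0}
print(to_rejudge)  # empty for this toy data; flagged pairs would be re-judged
```

In practice, such a vote is often combined with the other controls the paper reviews, such as qualification tests, honey-pot questions with known answers, and worker-accuracy weighting, rather than used on its own.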
