Conference: Multi-Disciplinary International Workshop on Artificial Intelligence

A Comparison of Domain Experts and Crowdsourcing Regarding Concept Relevance Evaluation in Ontology Learning



Abstract

Ontology learning helps to bootstrap and simplify the complex and expensive process of ontology construction by semi-automatically generating ontologies from data. As with other complex machine learning and NLP tasks, such systems always produce a certain ratio of errors, which makes it necessary to manually refine and prune the resulting ontologies. Here, we compare the use of domain experts and paid crowdsourcing for verifying domain ontologies. We present extensive experiments with different settings and task descriptions aimed at raising the rating quality for the task of assessing the relevance of new concept candidates generated by the system. With proper task descriptions and settings, crowd workers can provide quality similar to that of human experts. In the case of unclear task descriptions, crowd workers and domain experts often interpret the task at hand very differently; we analyze various types of these discrepancies in interpretation.
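The abstract does not name the agreement metric used to compare crowd and expert rating quality, but expert–crowd comparisons of relevance judgments are commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. The following sketch, with hypothetical relevance labels, illustrates the idea:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items where both raters gave the same label.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement by chance, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical relevance judgments ("rel"/"irr") for ten concept candidates.
expert = ["rel", "rel", "irr", "rel", "irr", "rel", "rel", "irr", "rel", "irr"]
crowd  = ["rel", "rel", "irr", "irr", "irr", "rel", "rel", "irr", "rel", "rel"]
print(round(cohens_kappa(expert, crowd), 3))  # → 0.583
```

Here the two raters agree on 8 of 10 items (observed = 0.8), but since both label "rel" often, much of that is expected by chance (expected = 0.52), leaving a moderate kappa of about 0.58.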


