Conference: Multi-disciplinary International Workshop on Artificial Intelligence

A Comparison of Domain Experts and Crowdsourcing Regarding Concept Relevance Evaluation in Ontology Learning



Abstract

Ontology learning helps to bootstrap and simplify the complex and expensive process of ontology construction by semi-automatically generating ontologies from data. As with other complex machine learning or NLP tasks, such systems always produce a certain ratio of errors, which makes manual refinement and pruning of the resulting ontologies necessary. Here, we compare the use of domain experts and paid crowdsourcing for verifying domain ontologies. We present extensive experiments with different settings and task descriptions aimed at raising the rating quality in the task of relevance assessment of new concept candidates generated by the system. With proper task descriptions and settings, crowd workers can provide quality similar to that of human experts. In the case of unclear task descriptions, crowd workers and domain experts often have very different interpretations of the task at hand; we analyze various types of discrepancy in interpretation.
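The paper itself does not include code. As a minimal illustrative sketch only, the snippet below shows one common way such a comparison can be set up: several crowd workers rate the relevance of concept candidates, their votes are aggregated by majority vote, and the aggregate is compared against expert labels with a chance-corrected agreement measure (Cohen's kappa). The concept names, labels, majority-vote aggregation, and choice of kappa are assumptions for illustration, not the authors' protocol.

```python
from collections import Counter
from sklearn.metrics import cohen_kappa_score

# Hypothetical relevance judgments (1 = relevant, 0 = not relevant) for
# concept candidates produced by an ontology learning system.
expert_labels = {"risk management": 1, "coffee break": 0, "liquidity": 1, "weather": 0}

# Several crowd workers rate each candidate; labels below are invented examples.
crowd_votes = {
    "risk management": [1, 1, 1, 0, 1],
    "coffee break":    [0, 0, 1, 0, 0],
    "liquidity":       [1, 0, 1, 1, 1],
    "weather":         [0, 0, 0, 1, 0],
}

def majority_vote(votes):
    """Return the most frequent label among the workers' votes."""
    return Counter(votes).most_common(1)[0][0]

concepts = sorted(expert_labels)
expert = [expert_labels[c] for c in concepts]
crowd = [majority_vote(crowd_votes[c]) for c in concepts]

# Chance-corrected agreement between aggregated crowd and expert judgments.
print("Cohen's kappa:", cohen_kappa_score(expert, crowd))
```

With more workers per item or graded (non-binary) relevance scales, the aggregation and agreement measure would change (e.g. Fleiss' kappa or weighted kappa), but the overall structure of the comparison stays the same.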


