Journal of the American Medical Informatics Association (JAMIA)

Using the wisdom of the crowds to find critical errors in biomedical ontologies: a study of SNOMED CT



Abstract

Objectives: The verification of biomedical ontologies is an arduous process that typically involves peer review by subject-matter experts. This work evaluated the ability of crowdsourcing methods to detect errors in SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) and to address the challenges of scalable ontology verification.

Methods: We developed a methodology for crowdsourced ontology verification that combines micro-tasking with a Bayesian classifier. We then conducted a prospective study in which both the crowd and domain experts verified a subset of SNOMED CT comprising 200 taxonomic relationships.

Results: The crowd identified errors as well as any single expert did, at about one-quarter of the cost. The inter-rater agreement (κ) between the crowd and the experts was 0.58; the inter-rater agreement among the experts themselves was 0.59, suggesting that the crowd is nearly indistinguishable from any one expert. Furthermore, the crowd identified 39 previously undiscovered, critical errors in SNOMED CT (eg, 'septic shock is a soft-tissue infection').

Discussion: The results show that the crowd can indeed identify errors in SNOMED CT that experts also find, and they suggest that our method will likely perform well on similar ontologies. The crowd may be particularly useful in situations where an expert is unavailable, budget is limited, or an ontology is too large for manual error checking. Finally, our results suggest that the online anonymous crowd could successfully complete other domain-specific tasks.

Conclusions: We have demonstrated that the crowd can address the challenges of scalable ontology verification, completing not only intuitive, common-sense tasks but also expert-level, knowledge-intensive tasks.
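The methods section describes aggregating micro-task votes with a Bayesian classifier. The paper does not specify the classifier's exact form here, so the following is only a minimal sketch of the general idea: treat each worker's vote on a taxonomic relationship as independent evidence with a fixed accuracy, and update a prior belief that the relationship is correct via Bayes' rule. The function name, the `prior`, and the `accuracy` values are illustrative assumptions, not figures from the study.

```python
def posterior_correct(votes, prior=0.9, accuracy=0.7):
    """Posterior probability that a taxonomic relationship is correct,
    given independent worker votes ('correct' / 'incorrect').

    prior    -- assumed base rate of correct relationships (illustrative)
    accuracy -- assumed per-worker probability of voting correctly (illustrative)
    """
    p_correct = prior
    p_incorrect = 1.0 - prior
    for vote in votes:
        if vote == "correct":
            p_correct *= accuracy          # worker agrees with "correct"
            p_incorrect *= 1.0 - accuracy  # worker erred if it is incorrect
        else:
            p_correct *= 1.0 - accuracy
            p_incorrect *= accuracy
    return p_correct / (p_correct + p_incorrect)

# Example: four of five workers flag a relationship (such as
# 'septic shock is a soft-tissue infection') as wrong.
votes = ["incorrect"] * 4 + ["correct"]
print(posterior_correct(votes))  # below 0.5, so the relationship is flagged as a likely error
```

In practice a model of this family would also estimate each worker's accuracy from gold-standard questions rather than fixing a single value, but the posterior update above captures how many cheap, noisy votes can combine into an expert-level judgement.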
