
CROWDSOURCING IMAGE ANNOTATION FOR NUCLEUS DETECTION AND SEGMENTATION IN COMPUTATIONAL PATHOLOGY: EVALUATING EXPERTS, AUTOMATED METHODS, AND THE CROWD



Abstract

The development of tools in computational pathology to assist physicians and biomedical scientists in the diagnosis of disease requires access to high-quality annotated images for algorithm learning and evaluation. Generating high-quality expert-derived annotations is time-consuming and expensive. We explore the use of crowdsourcing for rapidly obtaining annotations for two core tasks in computational pathology: nucleus detection and nucleus segmentation. We designed and implemented crowdsourcing experiments using the CrowdFlower platform, which provides access to a large set of labor channel partners that access and manage millions of contributors worldwide. We obtained annotations from four types of annotators and compared concordance across these groups. We obtained: crowdsourced annotations for nucleus detection and segmentation on a total of 810 images; annotations using automated methods on 810 images; annotations from research fellows for detection and segmentation on 477 and 455 images, respectively; and expert pathologist-derived annotations for detection and segmentation on 80 and 63 images, respectively. For the crowdsourced annotations, we evaluated performance across a range of contributor skill levels (1, 2, or 3). The crowdsourced annotations (4,860 images in total) were completed in a fraction of the time and at a fraction of the cost required to obtain annotations using traditional methods. For the nucleus detection task, the research fellow-derived annotations showed the strongest concordance with the expert pathologist-derived annotations (F-M = 93.68%), followed by the crowdsourced contributor levels 1, 2, and 3 and the automated method, which showed relatively similar performance (F-M = 87.84%, 88.49%, 87.26%, and 86.99%, respectively). For the nucleus segmentation task, the crowdsourced contributor level 3-derived annotations, research fellow-derived annotations, and automated method showed the strongest concordance with the expert pathologist-derived annotations (F-M = 66.41%, 65.93%, and 65.36%, respectively), followed by contributor levels 2 and 1 (60.89% and 60.87%, respectively). When the research fellows were used as a gold standard for the segmentation task, all three contributor levels of the crowdsourced annotations significantly outperformed the automated method (F-M = 62.21%, 62.47%, and 65.15% vs. 51.92%). Aggregating multiple annotations from the crowd into a consensus annotation yielded the strongest crowdsourced segmentation performance. For both detection and segmentation, crowdsourced performance is strongest with small images (400 × 400 pixels) and degrades significantly with larger images (600 × 600 and 800 × 800 pixels). We conclude that crowdsourcing to non-experts can be used for large-scale labeling microtasks in computational pathology and offers a new approach for the rapid generation of labeled images for algorithm development and evaluation.
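The concordance statistic reported throughout the abstract, F-M, is the F-measure: the harmonic mean of precision and recall. The sketch below illustrates how such a score could be computed for the detection task, assuming a greedy nearest-centroid matching rule and a hypothetical `match_radius` threshold; the paper's exact matching protocol is not given here, so treat both as illustrative assumptions.

```python
# Minimal sketch of an F-measure (F-M) concordance score for nucleus detection.
# The greedy nearest-match pairing and match_radius are assumptions for
# illustration, not the paper's exact evaluation protocol.
import math

def f_measure(predicted, reference, match_radius=10.0):
    """Return (precision, recall, F-measure) for detected nucleus centroids.

    predicted, reference: lists of (x, y) centroid coordinates.
    match_radius: max pixel distance (assumed) for a detection to count as a hit.
    """
    unmatched = list(reference)
    true_pos = 0
    for px, py in predicted:
        # Greedily pair each prediction with the nearest unclaimed reference nucleus.
        best, best_dist = None, match_radius
        for ref in unmatched:
            d = math.hypot(px - ref[0], py - ref[1])
            if d <= best_dist:
                best, best_dist = ref, d
        if best is not None:
            unmatched.remove(best)
            true_pos += 1
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(reference) if reference else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

# Example: three detections scored against three reference nuclei.
p, r, f = f_measure([(10, 10), (52, 48), (200, 200)],
                    [(12, 11), (50, 50), (120, 120)])
print(f"precision={p:.2f} recall={r:.2f} F-M={f:.2%}")
```

The abstract also reports that aggregating multiple crowd annotations into a consensus gave the strongest crowdsourced segmentation performance. A pixel-wise majority vote is one simple aggregation rule; the sketch below assumes that rule for illustration, and the paper's actual aggregation scheme may differ.

```python
# Sketch of consensus aggregation by pixel-wise majority vote over multiple
# crowd segmentation masks (an assumed strategy, shown for illustration).
import numpy as np

def majority_vote(masks):
    """masks: list of equally shaped binary (0/1) arrays, one per contributor."""
    stacked = np.stack(masks)
    # A pixel is foreground when more than half of the contributors marked it.
    return (stacked.sum(axis=0) * 2 > len(masks)).astype(np.uint8)

consensus = majority_vote([np.array([[1, 0], [1, 1]]),
                           np.array([[1, 1], [0, 1]]),
                           np.array([[0, 0], [1, 1]])])
print(consensus)  # [[1 0]
                  #  [1 1]]
```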
