
Crowdsourcing Labels for Pathological Patterns in CT Lung Scans: Can Non-experts Contribute Expert-Quality Ground Truth?

Abstract

This paper investigates the quality of ground truth obtainable when crowdsourcing specialist medical imaging annotations from non-experts. Following basic tuition, 34 volunteer participants independently delineated regions belonging to 7 pathological patterns in 20 CT lung scans according to expert-provided pattern labels. Participants' annotations were compared to a set of reference annotations using the Dice similarity coefficient (DSC) and found to range between 0.41 and 0.77; the repeatability of the reference annotations was 0.81. Analysis of prior imaging experience, annotation behaviour, scan ordering and time spent showed that only the last was correlated with annotation quality. Combining multiple observers by voxelwise majority vote outperformed a single observer, matching the reference repeatability for 5 of the 7 patterns. In conclusion, crowdsourcing from non-experts yields ground truth of acceptable quality, given sufficient expert task supervision and a sufficient number of observers per scan.
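The two quantities at the core of the abstract, the Dice similarity coefficient and the voxelwise majority vote, can be made concrete with a short sketch. This is a minimal illustration assuming binary voxel masks stored as NumPy boolean arrays; the function names, mask shape, and simulated observer masks below are hypothetical and not taken from the paper.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def majority_vote(masks: list[np.ndarray]) -> np.ndarray:
    """Voxelwise majority vote: a voxel is labelled positive when
    more than half of the observers marked it."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

# Toy usage: three simulated observer masks, each a noisy copy of a
# reference mask with ~5% of voxels flipped.
rng = np.random.default_rng(0)
reference = rng.random((64, 64, 32)) > 0.7
observers = [reference ^ (rng.random(reference.shape) > 0.95) for _ in range(3)]
combined = majority_vote(observers)
print("single observer DSC:", round(dice(observers[0], reference), 3))
print("majority-vote DSC:  ", round(dice(combined, reference), 3))
```

On this toy data the majority-vote mask scores a higher DSC than any single observer, mirroring the paper's finding that combining multiple non-expert observers improves annotation quality.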
