Joint International Workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting

Crowdsourcing Labels for Pathological Patterns in CT Lung Scans: Can Non-experts Contribute Expert-Quality Ground Truth?



Abstract

This paper investigates what quality of ground truth might be obtained when crowdsourcing specialist medical imaging ground truth from non-experts. Following basic tuition, 34 volunteer participants independently delineated regions belonging to 7 pathological patterns in 20 scans according to expert-provided pattern labels. Participants' annotations were compared to a set of reference annotations using the Dice similarity coefficient (DSC), and found to range between 0.41 and 0.77. The reference repeatability was 0.81. Analysis of prior imaging experience, annotation behaviour, scan ordering and time spent showed that only the last was correlated with annotation quality. Multiple observers combined by voxelwise majority vote outperformed a single observer, matching the reference repeatability for 5 of 7 patterns. In conclusion, crowdsourcing from non-experts yields acceptable-quality ground truth, given sufficient expert task supervision and a sufficient number of observers per scan.
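The two computations named in the abstract are standard: the Dice similarity coefficient, DSC(A, B) = 2|A ∩ B| / (|A| + |B|), and voxelwise majority voting across observers' masks. The sketch below shows both on binary NumPy masks; it is illustrative only — the function names and toy masks are not from the paper, and the paper's actual pipeline operates on multi-label 3D CT annotations.

```python
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)

def majority_vote(masks: list[np.ndarray]) -> np.ndarray:
    """Combine several observers' binary masks by voxelwise majority vote:
    a voxel is foreground if more than half of the observers marked it."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

# Toy example: three simulated observers annotating a tiny 2x3 "scan".
observers = [
    np.array([[1, 1, 0], [0, 1, 0]]),
    np.array([[1, 0, 0], [0, 1, 1]]),
    np.array([[1, 1, 0], [0, 0, 0]]),
]
reference = np.array([[1, 1, 0], [0, 1, 0]])
combined = majority_vote(observers)
print("DSC of combined mask vs reference:", dice_similarity(combined, reference))
```

Combining observers this way suppresses idiosyncratic errors of any single annotator, which is consistent with the abstract's finding that the majority-vote mask outperformed a single observer and matched reference repeatability for 5 of 7 patterns.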
