
Domain Knowledge Driven Multi-modal Segmentation of Anatomical Brain Barriers to Cancer Spread


Abstract

Accurately segmenting the anatomical brain barriers to cancer spread from multi-modal images is important for assisting the definition of the clinical target volume (CTV). In this work, we explore a multi-modal segmentation method that is largely driven by domain knowledge, with 3D U-Net as the backbone model. To reduce the learning difficulty of deep convolutional neural networks, we employ a label-merging strategy for symmetric structures that carry both left and right labels, so that the network focuses on structural appearance regardless of location. Moreover, considering that certain structures are better visualized in a particular modality and that co-registration may contain mismatches, we adopt a multi-modality ensemble strategy for multi-modal learning, letting the models be guided by the domain knowledge of this task rather than by a fully data-driven scheme such as the early-fusion strategy for multi-modal images. In our experiments, the multi-modality ensemble strategy yields better segmentation results. Our method achieved an average score of 0.895 on the final test dataset of the MICCAI 2020 Anatomical Brain Barriers to Cancer Spread challenge. Detailed methodologies and results are described in this technical report. (This work was done while X. Zou was a remote intern at CUHK.)
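A minimal sketch of the two strategies described above, assuming hypothetical label IDs, modality names, and ensemble weights (none of these specifics are given in the abstract); the actual implementation in the report may differ.

```python
import numpy as np

# Hypothetical label IDs for illustration; the challenge's actual labeling
# scheme and structure names may differ.
LEFT_RIGHT_PAIRS = {
    (2, 3): 2,  # e.g. left/right cochlea     -> single "cochlea" class
    (4, 5): 3,  # e.g. left/right optic nerve -> single "optic nerve" class
}

def merge_symmetric_labels(label_volume: np.ndarray) -> np.ndarray:
    """Map the left and right labels of a symmetric structure to one class,
    so the network learns its appearance regardless of side/location."""
    merged = label_volume.copy()
    for (left_id, right_id), merged_id in LEFT_RIGHT_PAIRS.items():
        merged[(label_volume == left_id) | (label_volume == right_id)] = merged_id
    return merged

def ensemble_modality_predictions(prob_maps: dict, weights: dict) -> np.ndarray:
    """Weighted average of softmax probability maps from models trained on
    different modalities (e.g. one 3D U-Net on CT, another on MR).

    prob_maps: modality name -> array of shape (C, D, H, W)
    weights:   modality name -> float, reflecting domain knowledge about
               which modality shows the target structures more clearly
    """
    total = sum(weights[m] for m in prob_maps)
    fused = sum(weights[m] * prob_maps[m] for m in prob_maps) / total
    return np.argmax(fused, axis=0)  # hard label map of shape (D, H, W)
```

The same fusion could also be applied per structure, for instance favoring MR probabilities for soft-tissue boundaries and CT for bony ones; the abstract does not state how modality preference is assigned, so the uniform weighting here is only illustrative.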

