
Unsupervised Domain Adaptation via Disentangled Representations: Application to Cross-Modality Liver Segmentation


Abstract

A deep learning model trained on labeled data from a certain source domain generally performs poorly on data from different target domains due to domain shift. Unsupervised domain adaptation methods address this problem by alleviating the domain shift between the labeled source data and the unlabeled target data. In this work, we achieve cross-modality domain adaptation, i.e. between CT and MRI images, via disentangled representations. Compared to learning a one-to-one mapping as the state-of-the-art CycleGAN does, our model recovers a many-to-many mapping between domains to capture the complex cross-domain relations. It preserves semantic feature-level information by finding a shared content space instead of performing a direct pixelwise style transfer. Domain adaptation is achieved in two steps. First, images from each domain are embedded into two spaces: a shared domain-invariant content space and a domain-specific style space. Next, the representation in the content space is extracted to perform a task. We validated our method on a cross-modality liver segmentation task, training a liver segmentation model on CT images that also performs well on MRI. Our method achieved a Dice Similarity Coefficient (DSC) of 0.81, outperforming a CycleGAN-based method, which achieved 0.72. Moreover, our model generalized well to joint-domain learning, in which unpaired data from different modalities are learned jointly to improve the segmentation performance on each individual modality. Lastly, on a multi-modal target domain with significant diversity, our approach exhibited the potential for diverse image generation and remained effective, with a DSC of 0.74 on multi-phasic MRI, while the CycleGAN-based method performed poorly with a DSC of only 0.52.
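To make the two-step formulation above concrete, here is a minimal PyTorch sketch of the disentangled-representation setup. It is not the authors' implementation: all module names, layer sizes, and the toy forward pass are illustrative assumptions. A single shared content encoder maps both CT and MRI into a domain-invariant content space, each modality keeps its own style encoder, and the segmentation head reads only the content code, so a head trained with CT labels can be reused on MRI.

```python
# Hypothetical sketch of the two-step scheme described in the abstract:
# step 1 embeds each domain into shared content + domain-specific style,
# step 2 runs the task (segmentation) on the content representation alone.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Shared across domains: image -> domain-invariant content feature map."""
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """One per domain: image -> low-dimensional domain-specific style code."""
    def __init__(self, in_ch=1, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, style_dim),
        )
    def forward(self, x):
        return self.net(x)

class SegHead(nn.Module):
    """Task network on the content space only; the style code is discarded."""
    def __init__(self, ch=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
        )
    def forward(self, c):
        return self.net(c)

content_enc = ContentEncoder()                               # shared: CT and MRI
style_enc_ct, style_enc_mr = StyleEncoder(), StyleEncoder()  # domain-specific
seg_head = SegHead()

ct = torch.randn(4, 1, 128, 128)   # stand-in for labeled source images (CT)
mr = torch.randn(4, 1, 128, 128)   # stand-in for unlabeled target images (MRI)

# Step 1: embed each domain into the two spaces.
c_ct, s_ct = content_enc(ct), style_enc_ct(ct)
c_mr, s_mr = content_enc(mr), style_enc_mr(mr)

# Step 2: segment from the content code; MRI reuses the CT-trained head
# because both domains share the same content space.
logits_ct = seg_head(c_ct)
logits_mr = seg_head(c_mr)
print(logits_ct.shape, logits_mr.shape)  # torch.Size([4, 2, 128, 128]) twice
```

In the full method, a decoder recombining content and style codes would also reconstruct and cross-translate images (enabling the many-to-many mapping and diverse image generation mentioned above); the sketch omits those losses and shows only the representation split that the segmentation task depends on.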

