International Workshop on Machine Learning in Medical Imaging

Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation


Abstract

To leverage the correlated information between modalities and thereby benefit cross-modal segmentation, we propose a novel cross-modal attention-guided convolutional network for multi-modal cardiac segmentation. In particular, we first employ cycle-consistent generative adversarial networks to perform bidirectional image generation (i.e., MR to CT and CT to MR), which helps reduce the modality-level inconsistency. Then, with the generated and original MR and CT images, a novel convolutional network is proposed in which (1) two encoders learn modality-specific features separately and (2) a common decoder learns shareable features between modalities for a final consistent segmentation. In addition, we propose a cross-modal attention module between the encoders and the decoder to leverage the correlated information between modalities. Our model can be trained in an end-to-end manner. In extensive evaluations on unpaired CT and MR cardiac images, our method outperforms the baselines in terms of segmentation performance.
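The abstract does not include implementation details, but the dual-encoder / shared-decoder design with a cross-modal attention module can be illustrated with a minimal PyTorch sketch. The code below is an assumption-based illustration, not the authors' implementation: the layer counts, channel widths, number of segmentation classes, and the exact attention formulation (here a non-local-style cross-attention) are all hypothetical, and the cycle-consistent image generation step is assumed to be handled beforehand by a separate CycleGAN.

import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 conv + BN + ReLU layers, a common encoder/decoder building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class CrossModalAttention(nn.Module):
    """Hypothetical non-local-style attention from one modality's features to the other's."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, feat_a, feat_b):
        b, c, h, w = feat_a.shape
        q = self.query(feat_a).flatten(2).transpose(1, 2)            # B x HW x C'
        k = self.key(feat_b).flatten(2)                              # B x C' x HW
        v = self.value(feat_b).flatten(2).transpose(1, 2)            # B x HW x C
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)   # B x HW x HW
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return feat_a + self.gamma * out                             # residual fusion


class DualEncoderSharedDecoder(nn.Module):
    def __init__(self, in_ch=1, base=32, num_classes=5):
        super().__init__()
        # Two encoders learn modality-specific features separately.
        self.enc_ct = nn.Sequential(conv_block(in_ch, base), nn.MaxPool2d(2), conv_block(base, base * 2))
        self.enc_mr = nn.Sequential(conv_block(in_ch, base), nn.MaxPool2d(2), conv_block(base, base * 2))
        # Cross-modal attention lets each stream borrow correlated context from the other.
        self.attn = CrossModalAttention(base * 2)
        # A single shared decoder learns shareable features for a consistent segmentation.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 2, stride=2),
            conv_block(base, base),
            nn.Conv2d(base, num_classes, 1),
        )

    def forward(self, ct, mr):
        f_ct, f_mr = self.enc_ct(ct), self.enc_mr(mr)
        seg_ct = self.dec(self.attn(f_ct, f_mr))  # CT stream attends to MR features
        seg_mr = self.dec(self.attn(f_mr, f_ct))  # MR stream attends to CT features
        return seg_ct, seg_mr


if __name__ == "__main__":
    net = DualEncoderSharedDecoder()
    ct, mr = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
    seg_ct, seg_mr = net(ct, mr)
    print(seg_ct.shape, seg_mr.shape)  # both: torch.Size([2, 5, 64, 64])

In this sketch each modality's features query the other's before entering the shared decoder, which mirrors the abstract's idea of leveraging correlated information between modalities; the actual attention module and training losses in the paper may differ substantially.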
