International Conference on Medical Image Computing and Computer-Assisted Intervention

Multimodal Priors Guided Segmentation of Liver Lesions in MRI Using Mutual Information Based Graph Co-Attention Networks



Abstract

Segmentation of focal liver lesions serves as an essential preprocessing step for initial diagnosis, stage differentiation, and post-treatment efficacy evaluation. Multimodal MRI scans (e.g., T1WI, T2WI) provide complementary information on liver lesions and are widely used for diagnosis. However, some modalities (e.g., T1WI) have high resolution but lack important visual information (e.g., edges) that belongs to other modalities (e.g., T2WI); it is therefore valuable to enhance tissue lesion quality in T1WI using priors from the other modality (T2WI) and thereby improve segmentation performance. In this paper, we propose a graph learning based approach motivated by the need to extract modality-specific features efficiently and to establish regional correspondence between T1WI and T2WI effectively. We first project deep features into a graph domain and employ graph convolution to propagate information across all regions, extracting modality-specific features. We then propose a mutual information based graph co-attention module to learn the weight coefficients of a bipartite graph constructed by fully connecting the graphs of the different modalities in the graph domain. Finally, we obtain the refined features for segmentation via re-projection and a residual connection. We validate our method on a multimodal MRI liver lesion dataset. Experimental results show that, compared to existing methods, the proposed approach improves liver lesion segmentation in T1WI by learning guided features from multimodal priors (T2WI).
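
The abstract outlines a four-step pipeline: project deep features into a graph domain, propagate information with graph convolution, weight a cross-modal bipartite graph with a mutual information based co-attention module, and re-project with a residual connection. The minimal PyTorch sketch below illustrates how such a pipeline could be wired together; the module names, node count, and the bilinear affinity used in place of the paper's mutual-information weighting are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only; the exact projection, graph convolution, and
# mutual-information-based weighting of the paper are not specified in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphProjection(nn.Module):
    """Project a (B, C, H, W) feature map onto N graph nodes via a learned
    soft region assignment ('project deep features into a graph domain')."""
    def __init__(self, channels: int, num_nodes: int):
        super().__init__()
        self.assign = nn.Conv2d(channels, num_nodes, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = F.softmax(self.assign(x).flatten(2), dim=-1)   # (B, N, H*W) soft assignment
        z = torch.bmm(q, x.flatten(2).transpose(1, 2))     # (B, N, C) node features
        return z, q                                        # keep q for re-projection

class GraphCoAttention(nn.Module):
    """Bipartite co-attention between two modality graphs (e.g., T1WI and T2WI).
    A bilinear affinity stands in for the mutual-information-based weighting."""
    def __init__(self, channels: int):
        super().__init__()
        self.gcn = nn.Linear(channels, channels)       # simple graph-convolution weight
        self.affinity = nn.Linear(channels, channels)  # bilinear cross-modal affinity

    def forward(self, z1, z2):
        z1 = F.relu(self.gcn(z1))                              # per-modality propagation
        z2 = F.relu(self.gcn(z2))
        attn = torch.bmm(self.affinity(z1), z2.transpose(1, 2))  # (B, N1, N2) bipartite weights
        return z1 + torch.bmm(F.softmax(attn, dim=-1), z2)       # aggregate T2WI nodes into T1WI graph

def reproject(z, q, shape):
    """Re-project refined node features back to the spatial domain."""
    b, c, h, w = shape
    x = torch.bmm(q.transpose(1, 2), z)        # (B, H*W, C)
    return x.transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    b, c, h, w, n = 1, 64, 32, 32, 16
    t1, t2 = torch.randn(b, c, h, w), torch.randn(b, c, h, w)
    proj, coatt = GraphProjection(c, n), GraphCoAttention(c)
    z1, q1 = proj(t1)
    z2, _ = proj(t2)
    refined = t1 + reproject(coatt(z1, z2), q1, t1.shape)  # residual connection in T1WI space
    print(refined.shape)  # torch.Size([1, 64, 32, 32])

In a full segmentation network, a feature map of this kind would typically be produced by an encoder applied to each modality and consumed by a decoder head; only the graph-domain refinement is sketched here.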
