
Multimodal Cardiac Segmentation Using Disentangled Representation Learning



Abstract

Magnetic Resonance (MR) protocols use several sequences to evaluate pathology and organ status. Yet, despite recent advances, the analysis of each sequence's images (modality hereafter) is treated in isolation. We propose a method suitable for multimodal and multi-input learning and analysis, that disentangles anatomical and imaging factors, and combines anatomical content across the modalities to extract more accurate segmentation masks. Mis-registrations between the inputs are handled with a Spatial Transformer Network, which non-linearly aligns the (now intensity-invariant) anatomical factors. We demonstrate applications in Late Gadolinium Enhanced (LGE) and cine MRI segmentation. We show that multi-input outperforms single-input models, and that we can train a (semi-supervised) model with few (or no) annotations for one of the modalities. Code is available at https://github.com/agis85/multimodal_segmentation.
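The fusion step described above — combining intensity-invariant anatomical factors from two aligned modalities before segmenting — can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the function names, the element-wise max fusion, and the linear segmentor are all assumptions for clarity; in the actual method, fusion and segmentation are learned networks, and alignment is performed by a Spatial Transformer Network (omitted here, inputs are assumed pre-aligned).

```python
import numpy as np

def fuse_anatomy(s_cine, s_lge):
    """Combine two spatially aligned anatomical factor maps (C, H, W).

    Element-wise max is one simple, intensity-invariant way to merge
    anatomical content from both modalities (a stand-in for the paper's
    learned fusion).
    """
    return np.maximum(s_cine, s_lge)

def segment(s_fused, weights):
    """Toy segmentor: a linear map over anatomical channels plus argmax.

    s_fused: (C, H, W) fused anatomical factors
    weights: (K, C) mapping C anatomical channels to K tissue classes
    returns: (H, W) integer label map
    """
    logits = np.einsum('kc,chw->khw', weights, s_fused)
    return logits.argmax(axis=0)

# Hypothetical example: 3 anatomical channels on a 4x4 grid.
s_cine = np.zeros((3, 4, 4)); s_cine[0] = 1.0          # background visible in cine
s_lge = np.zeros((3, 4, 4)); s_lge[1, :2, :2] = 1.0    # structure visible only in LGE
mask = segment(fuse_anatomy(s_cine, s_lge), np.eye(3))
```

The point of the sketch is that structures visible in only one modality (here, the LGE-only channel) still reach the segmentor after fusion — the motivation for multi-input over single-input models.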

