
Deep-Learning-Based Multi-Modal Fusion for Fast MR Reconstruction



Abstract

T1-weighted image (T1WI) and T2-weighted image (T2WI) are two routinely acquired magnetic resonance (MR) modalities that provide complementary information for clinical and research use. However, the relatively long acquisition time makes the acquired images vulnerable to motion artifacts. To speed up the imaging process, various algorithms have been proposed to reconstruct high-quality images from under-sampled k-space data. However, most existing algorithms rely only on mono-modality acquisition for image reconstruction. In this paper, we propose to combine complementary MR acquisitions (specifically, T1WI and under-sampled T2WI) to reconstruct a high-quality image corresponding to the fully sampled T2WI. To the best of our knowledge, this is the first work to fuse multi-modal MR acquisitions through deep learning to speed up the reconstruction of a target image. Specifically, we present a novel deep learning approach, namely Dense-Unet, to accomplish the reconstruction task. The proposed Dense-Unet requires fewer parameters and less computation, while achieving promising performance. Our results have shown that Dense-Unet can reconstruct a three-dimensional T2WI volume in less than 10 s at a k-space under-sampling rate of 8, with negligible aliasing artifacts or signal-to-noise-ratio loss. Experiments also demonstrate excellent transferability of Dense-Unet when applied to datasets acquired by different MR scanners. These results suggest great potential of our method in many clinical scenarios.
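The abstract does not give implementation details, but the core idea it describes, feeding the T1WI together with a zero-filled reconstruction of the 8x under-sampled T2WI into a U-Net built from dense blocks and regressing the fully sampled T2WI, can be illustrated with a minimal PyTorch sketch. The layer counts, growth rate, two-level depth, and 1-D Cartesian sampling mask below are illustrative assumptions, not the architecture or mask used in the paper.

```python
# Minimal sketch (not the authors' released code) of multi-modal fusion for
# fast MR reconstruction: T1WI + zero-filled under-sampled T2WI -> dense-block
# U-Net -> predicted fully sampled T2WI. All sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseBlock(nn.Module):
    """A few 3x3 conv layers whose inputs/outputs are concatenated (dense connectivity)."""
    def __init__(self, in_ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class DenseUnet(nn.Module):
    """Two-level encoder/decoder with dense blocks and one skip connection."""
    def __init__(self):
        super().__init__()
        self.enc1 = DenseBlock(2)                      # input channels: [T1WI, zero-filled T2WI]
        self.enc2 = DenseBlock(self.enc1.out_ch)
        self.dec1 = DenseBlock(self.enc2.out_ch + self.enc1.out_ch)
        self.head = nn.Conv2d(self.dec1.out_ch, 1, 1)  # predicted fully sampled T2WI

    def forward(self, t1, t2_zero_filled):
        x = torch.cat([t1, t2_zero_filled], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        d1 = self.dec1(torch.cat([F.interpolate(e2, scale_factor=2), e1], dim=1))
        return self.head(d1)


def zero_filled_recon(t2_full, accel=8):
    """Keep every `accel`-th phase-encoding line in k-space, zero the rest, inverse-FFT back."""
    k = torch.fft.fft2(t2_full)
    mask = torch.zeros_like(k)
    mask[..., ::accel, :] = 1            # illustrative 1-D Cartesian mask
    return torch.fft.ifft2(k * mask).abs()


if __name__ == "__main__":
    t1 = torch.rand(1, 1, 128, 128)          # toy T1WI slice
    t2 = torch.rand(1, 1, 128, 128)          # toy fully sampled T2WI (training target)
    t2_zf = zero_filled_recon(t2, accel=8)   # under-sampled network input
    pred = DenseUnet()(t1, t2_zf)
    print(pred.shape)                        # torch.Size([1, 1, 128, 128])
```

Dense connectivity reuses feature maps from earlier layers instead of widening each layer, which is consistent with the abstract's claim that the network needs fewer parameters and less computation than a comparable plain U-Net; the exact savings depend on design choices not stated in this record.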

Record details

  • Source
    IEEE Transactions on Biomedical Engineering | 2019, Issue 7 | pp. 2105-2114 | 10 pages
  • Author affiliations

    Shanghai Jiao Tong Univ, Sch Biomed Engn, Inst Med Imaging Technol, Shanghai 200052, Peoples R China;

    Univ N Carolina, Dept Radiol, Chapel Hill, NC 27515 USA|Univ N Carolina, BRIC, Chapel Hill, NC 27515 USA;

    Univ N Carolina, Dept Radiol, Chapel Hill, NC 27515 USA|Univ N Carolina, BRIC, Chapel Hill, NC 27515 USA;

    Shanghai Jiao Tong Univ, Sch Biomed Engn, Inst Med Imaging Technol, Shanghai 200052, Peoples R China;

    Univ N Carolina, Dept Radiol, Chapel Hill, NC 27515 USA|Univ N Carolina, BRIC, Chapel Hill, NC 27515 USA;

    Shanghai Jiao Tong Univ, Sch Biomed Engn, Inst Med Imaging Technol, Shanghai 200052, Peoples R China;

    Univ N Carolina, Dept Radiol, Chapel Hill, NC 27515 USA|Univ N Carolina, BRIC, Chapel Hill, NC 27515 USA|Korea Univ, Dept Brain & Cognit Engn, Seoul, South Korea;

  • Indexing information
  • Original format: PDF
  • Language: eng
  • CLC classification
  • Keywords

    Deep learning; dense block; fast MR reconstruction; multi-modal fusion;

