IEEE International Symposium on Computer-Based Medical Systems

3D Deep Learning for Anatomical Structure Segmentation in Multiple Imaging Modalities

Abstract

Accurate, automated quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges in developing robust automated segmentation techniques, including high variations in anatomical structure and size, varying image spatial resolutions resulting from different scanner protocols, and the presence of blurring artefacts. This paper presents a novel computing approach for automated organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process: (1) a 3D encoder-decoder, Rb-UNet, builds a localisation model, and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal organ or muscle boundaries for every protrusion and indentation. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and iliopsoas muscles. We achieve mean Dice similarity coefficient (DSC) scores that surpass or are comparable with the state of the art and demonstrate statistical stability. A qualitative evaluation performed by two independent experts in radiology and radiography verified the preservation of detailed organ and muscle boundaries.
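
The abstract describes a two-stage pipeline: a localisation network narrows the search to a 3D bounding box, and a fine segmentation network delineates the structure inside it. The sketch below illustrates that inference flow in PyTorch under stated assumptions: `localiser` and `segmenter` are hypothetical placeholders for the fully trained Rb-UNet and 3D Tiramisu models (whose architectures are not reproduced here), and the bounding box is derived from a thresholded coarse mask, which is one plausible way to realise the box prediction; the `dice_coefficient` helper only restates the evaluation metric reported in the paper.

```python
# Minimal sketch of the two-stage inference flow described in the abstract.
# `localiser` and `segmenter` are assumed to be fully trained 3D networks
# (Rb-UNet and Tiramisu in the paper); their architectures are not shown here.
import torch


def mask_to_bbox(mask: torch.Tensor, margin: int = 8):
    """Reduce a coarse binary mask of shape (D, H, W) to a padded 3D bounding box.

    Assumes the localiser found at least one foreground voxel.
    """
    coords = torch.nonzero(mask, as_tuple=False)           # (N, 3) voxel indices
    lo = (coords.min(dim=0).values - margin).clamp(min=0)
    hi = torch.minimum(coords.max(dim=0).values + margin + 1,
                       torch.tensor(mask.shape))
    return lo, hi


@torch.no_grad()
def segment_volume(volume: torch.Tensor, localiser, segmenter, threshold: float = 0.5):
    """Two-stage segmentation of a single (D, H, W) MRI/DCE-MRI/CT volume.

    Stage 1: the localiser predicts a coarse foreground map, reduced to a 3D box.
    Stage 2: the segmenter produces a boundary-preserving mask inside that box.
    """
    x = volume[None, None]                                  # (1, 1, D, H, W)
    coarse = torch.sigmoid(localiser(x))[0, 0]              # coarse probability map
    lo, hi = mask_to_bbox(coarse > threshold)

    roi = x[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]     # crop the region of interest
    fine = torch.sigmoid(segmenter(roi))[0, 0] > threshold  # fine segmentation of the ROI

    full = torch.zeros(volume.shape, dtype=torch.bool)      # paste ROI prediction back
    full[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine
    return full


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice similarity coefficient (DSC), the quantitative metric reported in the paper."""
    pred, target = pred.float(), target.float()
    intersection = (pred * target).sum()
    return float((2 * intersection + eps) / (pred.sum() + target.sum() + eps))
```

Since the paper trains a localisation and a segmentation model for each target structure, a routine like this would be invoked once per structure (pancreas, liver, kidneys, iliopsoas) with the corresponding model pair.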