Medical Physics — Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks

Fully automatic multi‐organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks



Abstract

Purpose: Intensity modulated radiation therapy (IMRT) is commonly employed to treat head and neck (H&N) cancer, delivering a uniform tumor dose while conformally sparing critical organs. Accurate delineation of organs-at-risk (OARs) on H&N CT images is therefore essential to treatment quality. The manual contouring used in current clinical practice is tedious, time-consuming, and can produce inconsistent results. Existing automated segmentation methods are challenged by substantial inter-patient anatomical variation and low CT soft-tissue contrast. To overcome these challenges, we developed a novel automated H&N OAR segmentation method that combines a fully convolutional neural network (FCNN) with a shape representation model (SRM).

Methods: Based on manually segmented H&N CT, the SRM and FCNN were trained in two steps: (a) the SRM learned the latent shape representation of H&N OARs from the training dataset; (b) the pre-trained SRM, with its parameters fixed, was used to constrain the FCNN training. The combined segmentation network was then used to delineate nine OARs, including the brainstem, optic chiasm, mandible, optic nerves, parotid glands, and submandibular glands, on unseen H&N CT images. Twenty-two and ten H&N CT scans provided by the Public Domain Database for Computational Anatomy (PDDCA) were used for training and validation, respectively. Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and 95% maximum surface distance (95%SD) were calculated to quantitatively evaluate segmentation accuracy. The proposed method was compared with an active appearance model that won the 2015 MICCAI H&N Segmentation Grand Challenge on the same dataset, as well as with an atlas-based method and a deep learning method evaluated on different patient datasets.
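The constrained training in step (b) can be read as a composite loss: the FCNN's segmentation loss plus a penalty on the distance between the frozen SRM's encodings of the prediction and the ground truth. The sketch below is a minimal NumPy illustration under that assumption; `srm_encode`, the shape-term weight, and the soft-Dice form are hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

def dice_loss(pred, gt, eps=1e-7):
    # soft Dice loss on probability maps (0 when pred matches gt)
    inter = (pred * gt).sum()
    return 1.0 - 2.0 * inter / (pred.sum() + gt.sum() + eps)

def srm_constrained_loss(pred, gt, srm_encode, weight=0.1):
    """Composite loss: segmentation Dice loss plus a shape term that
    penalizes the squared distance between the SRM codes of the
    prediction and the ground truth. `srm_encode` stands in for the
    frozen, pre-trained shape representation model."""
    shape_term = np.mean((srm_encode(pred) - srm_encode(gt)) ** 2)
    return dice_loss(pred, gt) + weight * shape_term

# toy check: identical masks give a (near-)zero total loss,
# disjoint masks a large one (srm_toy is an illustrative encoder)
srm_toy = lambda m: m.mean(axis=0)
p = np.array([[0.0, 1.0], [1.0, 1.0]])
loss_same = srm_constrained_loss(p, p, srm_toy)
loss_diff = srm_constrained_loss(p, 1.0 - p, srm_toy)
```

Freezing the SRM means only the FCNN's parameters receive gradients, so the shape prior acts as a fixed regularizer rather than being co-adapted during segmentation training.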
Results: Average DSC values of 0.870 (brainstem), 0.583 (optic chiasm), 0.937 (mandible), 0.653 (left optic nerve), 0.689 (right optic nerve), 0.835 (left parotid), 0.832 (right parotid), 0.755 (left submandibular), and 0.813 (right submandibular) were achieved. The segmentation results are consistently superior to those of atlas-based and statistical-shape-based methods, as well as a patch-wise convolutional neural network method. Once the networks are trained offline, the average time to segment all nine OARs on an unseen CT scan is 9.5 s.

Conclusion: Experiments on clinical H&N patient datasets demonstrated the effectiveness of the proposed deep neural network method for multi-organ segmentation on volumetric CT scans. The accuracy and robustness of the segmentation were further increased by incorporating shape priors through the SRM. Compared with state-of-the-art methods, the proposed method showed competitive performance and required less time to segment multiple organs.
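The volumetric overlap metrics used above (DSC, PPV, SEN) reduce to simple counts over binary masks. A minimal NumPy sketch, with illustrative 1-D "masks" in place of real CT label volumes:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice similarity coefficient (DSC), positive predictive value
    (PPV), and sensitivity (SEN) between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # true positives
    dsc = 2 * tp / (pred.sum() + gt.sum())
    ppv = tp / pred.sum()                 # precision of the prediction
    sen = tp / gt.sum()                   # recall of the ground truth
    return dsc, ppv, sen

# toy example: 3 true positives, 1 false positive, 1 false negative
pred = np.array([1, 1, 1, 1, 0])
gt   = np.array([1, 1, 1, 0, 1])
dsc, ppv, sen = overlap_metrics(pred, gt)
# dsc = 2*3/(4+4) = 0.75, ppv = 3/4, sen = 3/4
```

The surface-distance metrics (ASD, 95%SD) additionally require extracting mask boundaries and computing point-to-surface distances, which is omitted here.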
