Medical Physics

Grading of hepatocellular carcinoma based on diffusion weighted images with multiple b‐values using convolutional neural networks


Abstract

Purpose: To effectively grade hepatocellular carcinoma (HCC) based on deep features derived from diffusion-weighted images (DWI) with multiple b-values using convolutional neural networks (CNNs).

Materials and Methods: Ninety-eight subjects with 100 pathologically confirmed HCC lesions (47 low-grade and 53 high-grade), examined between July 2012 and October 2018, were included in this retrospective study. DWI was performed for each subject on a 3.0 T MR scanner in a breath-hold routine with three b-values (0, 100, and 600 s/mm²). First, a logarithmic transformation was applied to the original DWI images to generate log maps (logb0, logb100, and logb600). Then, a resampling method extracted multiple 2D axial planes of each HCC from the log maps to enlarge the training dataset. Subsequently, a 2D CNN was used to extract deep features of the log maps for HCCs. Finally, the deep features derived from the three b-value log maps were fused for HCC malignancy classification; specifically, a deeply supervised loss function was devised to further improve the performance of lesion characterization. The dataset was split into two parts: a training and validation set (60 HCCs) and a fixed test set (40 HCCs). Four-fold cross-validation with 10 repetitions on the training and validation set was performed to assess the performance of deep features extracted from single b-value images for HCC grading. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were used to assess how well the proposed deep-feature-fusion method differentiated low-grade from high-grade lesions on the fixed test set.

Results: The proposed fusion of deep features derived from logb0, logb100, and logb600 with the deeply supervised loss function achieved the highest accuracy for HCC grading (80%), outperforming deep features derived directly from the ADC map (72.5%) and from the original b0 (65%), b100 (68%), and b600 (70%) images. Furthermore, the AUC values for the deep features of the ADC map, deep feature fusion by concatenation, and the proposed deep feature fusion with the deeply supervised loss function were 0.73, 0.78, and 0.83, respectively.

Conclusion: The proposed fusion of deep features derived from the logarithm of the three b-value images yields high performance for HCC grading, providing a promising approach to assessing DWI for lesion characterization.
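The pipeline described above (per-b-value log maps, a feature-extraction branch per map, then fusion of the three branches) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the exact form of the logarithmic transform is not given in the abstract, so a natural log with a small epsilon is assumed, and a simple pooling function stands in for the learned 2D CNN branch. The deeply supervised loss is omitted.

```python
import numpy as np

def log_map(dwi, eps=1e-6):
    """Logarithmic transform of a DWI image (assumed form: the abstract
    does not specify the formula, so natural log plus epsilon is used)."""
    return np.log(dwi.astype(np.float64) + eps)

def extract_features(img):
    """Placeholder for one 2D CNN branch: global mean/std pooling
    stands in for learned deep features."""
    return np.array([img.mean(), img.std()])

# Synthetic 2D axial slices for the three b-values (0, 100, 600 s/mm^2).
rng = np.random.default_rng(0)
b0, b100, b600 = (rng.uniform(1.0, 100.0, (64, 64)) for _ in range(3))

# One log map per b-value, one feature vector per branch, then fusion
# by concatenation across the three branches.
fused = np.concatenate(
    [extract_features(log_map(b)) for b in (b0, b100, b600)]
)
print(fused.shape)  # one fused feature vector: 3 branches x 2 features
```

In the paper the fused features feed a classifier that separates low-grade from high-grade HCC; here the sketch stops at the fused vector, since the classifier head and the deeply supervised loss are learned components not reproducible from the abstract.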
