JMLR: Workshop and Conference Proceedings

3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks

Abstract

Training deep convolutional neural networks requires a large amount of data to obtain good performance and generalisable results. Transfer learning approaches from datasets such as ImageNet have become important for increasing accuracy and lowering the number of training samples required. However, as of now, there has not been a popular dataset for training on 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that contain information on the appearance of the scans, in order to train a medical domain 3D convolutional neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method to extract labels from a large collection of cancer imaging datasets from TCIA and trained a medical domain 3D deep convolutional neural network. We evaluated the effectiveness of using our proposed network for transfer learning on a liver segmentation task and found that it achieved superior segmentation performance (DICE = 90.0%) compared to training from scratch (DICE = 41.8%). Our proposed network shows promising results for use as a backbone network for transfer learning to other tasks. Our approach, together with our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets.
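As a concrete illustration of the label-extraction idea described above, the sketch below reads standard DICOM header attributes with pydicom and derives a few coarse labels of the kind listed in the abstract (modality, presence of contrast, acquisition view, scan target, slice spacing). The attribute keywords are standard DICOM tags; the mapping heuristics and the function name extract_labels are illustrative assumptions, not the authors' actual pipeline.

import pydicom


def extract_labels(dicom_path: str) -> dict:
    # Read only the metadata header; pixel data is not needed for labels.
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)

    # Imaging modality, e.g. "CT" or "MR" (tag 0008,0060).
    modality = str(ds.get("Modality", "UNKNOWN"))

    # Treat a non-empty ContrastBolusAgent (0018,0010) as contrast-enhanced
    # (a simplifying assumption for this sketch).
    contrast = bool(str(ds.get("ContrastBolusAgent", "")).strip())

    # Acquisition view from ImageOrientationPatient (0020,0037): the dominant
    # component of the slice normal indicates sagittal, coronal or axial.
    view = "unknown"
    iop = ds.get("ImageOrientationPatient", None)
    if iop is not None and len(iop) == 6:
        r = [float(v) for v in iop[:3]]
        c = [float(v) for v in iop[3:]]
        normal = [r[1] * c[2] - r[2] * c[1],
                  r[2] * c[0] - r[0] * c[2],
                  r[0] * c[1] - r[1] * c[0]]
        view = ("sagittal", "coronal", "axial")[
            max(range(3), key=lambda i: abs(normal[i]))]

    # Scan target hint from BodyPartExamined (0018,0015), when present.
    body_part = str(ds.get("BodyPartExamined", "")) or "UNSPECIFIED"

    # Slice spacing: prefer SpacingBetweenSlices (0018,0088), otherwise
    # fall back to SliceThickness (0018,0050).
    spacing = ds.get("SpacingBetweenSlices", None)
    if spacing is None:
        spacing = ds.get("SliceThickness", None)

    return {
        "modality": modality,
        "contrast": contrast,
        "view": view,
        "body_part": body_part,
        "slice_spacing": float(spacing) if spacing is not None else None,
    }

In practice, labels like these would typically be aggregated per series and cleaned (for example, harmonising free-text BodyPartExamined values) before being used as training targets for the 3D network.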
