Conference on Medical Imaging: Computer-Aided Diagnosis

Multi-task learning to incorporate clinical knowledge into deep learning for breast cancer diagnosis



Abstract

Deep learning models are traditionally trained in a purely data-driven manner; the information used for model training usually comes from the training data alone. In this work, we investigate how to supply additional clinical knowledge associated with the training data. Our goal is to train deep learning models for breast cancer diagnosis from mammogram images. Alongside the main classification task of distinguishing clinically proven cancer from negative/benign cases, we design two auxiliary tasks, each capturing a form of additional knowledge to facilitate the main task. Specifically, one auxiliary task is to classify images according to the radiologist-assigned BI-RADS diagnosis scores, and the other is to classify images in terms of the BI-RADS breast density categories. We customize a multi-task learning model to jointly perform the three tasks (the main task and the two auxiliary tasks). We test four deep learning architectures: CBR-Tiny, ResNet18, GoogleNet, and DenseNet, and we investigate the benefit of incorporating such knowledge both on top of ImageNet pre-trained models and with randomly initialized models. We run experiments on an internal dataset of screening full-field digital mammography images, 1,380 images in total (341 cancer and 1,039 negative or benign). Our results show that, by adding the clinical knowledge conveyed through the two auxiliary tasks to the training process, we can improve the performance of the target task of breast cancer diagnosis, highlighting the benefit of incorporating clinical knowledge into data-driven learning to enhance deep learning model training.
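For illustration, below is a minimal sketch of the multi-task setup the abstract describes, assuming a PyTorch implementation with a shared ResNet18 backbone and one linear classification head per task. The head sizes (number of BI-RADS assessment scores and density categories), the auxiliary loss weight, and all names are illustrative assumptions rather than details reported in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class MultiTaskMammoModel(nn.Module):
    # Shared feature extractor with three classification heads:
    # main task (cancer vs negative/benign), auxiliary BI-RADS assessment
    # score, and auxiliary BI-RADS breast density category.
    def __init__(self, n_birads_scores=5, n_density_cats=4, pretrained=True):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1" if pretrained else None)
        feat_dim = backbone.fc.in_features      # 512 for ResNet18
        backbone.fc = nn.Identity()             # keep only the shared trunk
        self.backbone = backbone
        self.head_main = nn.Linear(feat_dim, 2)                  # cancer vs negative/benign
        self.head_birads = nn.Linear(feat_dim, n_birads_scores)  # BI-RADS score (auxiliary)
        self.head_density = nn.Linear(feat_dim, n_density_cats)  # breast density (auxiliary)

    def forward(self, x):
        # x: a batch of mammogram images; single-channel images would need to be
        # replicated to 3 channels (or conv1 adapted) before reaching this backbone.
        feats = self.backbone(x)
        return self.head_main(feats), self.head_birads(feats), self.head_density(feats)

def multitask_loss(outputs, targets, aux_weight=0.5):
    # Joint objective: main-task cross-entropy plus down-weighted auxiliary
    # cross-entropies (aux_weight is an assumed hyperparameter, not from the paper).
    out_main, out_birads, out_density = outputs
    y_main, y_birads, y_density = targets
    return (F.cross_entropy(out_main, y_main)
            + aux_weight * F.cross_entropy(out_birads, y_birads)
            + aux_weight * F.cross_entropy(out_density, y_density))

In such a setup, each training mini-batch would carry all three labels and the summed loss would be backpropagated through the shared backbone, so the auxiliary tasks shape the shared representation; at inference, only the main head would be used for the diagnosis prediction.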
