Annual International Conference of the IEEE Engineering in Medicine and Biology Society

Deep Learning in ex-vivo Lung Cancer Discrimination using Fluorescence Lifetime Endomicroscopic Images


Abstract

Fluorescence lifetime is effective in discriminating cancerous from normal tissue, but conventional discrimination methods rely primarily on statistical approaches combined with prior knowledge. This paper investigates the application of deep convolutional neural networks (CNNs) for automatic differentiation of ex-vivo human lung cancer via fluorescence lifetime imaging. Around 70,000 fluorescence images from ex-vivo lung tissue of 14 patients were collected with a custom fibre-based fluorescence lifetime imaging endomicroscope. Five state-of-the-art CNN models, namely ResNet, ResNeXt, Inception, Xception, and DenseNet, were trained and tested, with accuracy, precision, recall, and the area under the receiver operating characteristic curve (AUC) as the metrics. The CNNs were first evaluated on lifetime images alone. Since fluorescence lifetime is independent of intensity, further experiments were conducted by stacking intensity and lifetime images together as the input to the CNNs. As the original CNNs were implemented for RGB images, two strategies were applied: one retained the CNNs unchanged, placing the intensity and lifetime images in two different channels and leaving the remaining channel blank; the other adapted the CNNs to accept two-channel input. Quantitative results demonstrate that the selected CNNs are considerably superior to conventional machine learning algorithms, and that combining intensity and lifetime images introduces a noticeable performance gain over using lifetime images alone. In addition, the CNNs with intensity-lifetime RGB input are comparable to the modified two-channel CNNs in accuracy and AUC, but significantly better in precision and recall.
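The two input strategies described in the abstract can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: a toy CNN stands in for the ResNet/DenseNet-style models, and the image sizes and normalisation are assumptions. Strategy 1 keeps a stock three-channel network and fills the unused channel with zeros; strategy 2 changes the first layer to accept two channels directly.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy stand-in for the paper's CNNs (ResNet, DenseNet, etc.)."""
    def __init__(self, in_channels: int, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global average pooling
        )
        self.classifier = nn.Linear(16, num_classes)  # cancer vs. normal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Hypothetical normalised single-channel images, shape (batch, 1, H, W).
intensity = torch.rand(4, 1, 128, 128)
lifetime = torch.rand(4, 1, 128, 128)

# Strategy 1: retain the stock RGB CNN; put intensity and lifetime in
# two channels and leave the remaining channel blank (zeros).
blank = torch.zeros_like(intensity)
rgb_input = torch.cat([intensity, lifetime, blank], dim=1)  # (4, 3, H, W)
logits_rgb = TinyCNN(in_channels=3)(rgb_input)

# Strategy 2: adapt the network's first layer for two-channel input.
two_ch_input = torch.cat([intensity, lifetime], dim=1)      # (4, 2, H, W)
logits_2ch = TinyCNN(in_channels=2)(two_ch_input)
```

With a real pretrained backbone, strategy 2 would amount to replacing the first convolution (e.g. a ResNet's `conv1`) with one whose `in_channels=2`, which is why the paper treats it as a separate, modified model.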
