Physics in Medicine and Biology

Deep learning with cinematic rendering: fine-tuning deep neural networks using photorealistic medical images



Abstract

Deep learning has emerged as a powerful artificial intelligence tool to interpret medical images for a growing variety of applications. However, the paucity of medical imaging data with high-quality annotations that is necessary for training such methods ultimately limits their performance. Medical data is challenging to acquire due to privacy issues, shortage of experts available for annotation, limited representation of rare conditions and cost. This problem has previously been addressed by using synthetically generated data. However, networks trained on synthetic data often fail to generalize to real data. Cinematic rendering simulates the propagation and interaction of light passing through tissue models reconstructed from CT data, enabling the generation of photorealistic images. In this paper, we present one of the first applications of cinematic rendering in deep learning, in which we propose to fine-tune synthetic data-driven networks using cinematically rendered CT data for the task of monocular depth estimation in endoscopy. Our experiments demonstrate that: (a) convolutional neural networks (CNNs) trained on synthetic data and fine-tuned on photorealistic cinematically rendered data adapt better to real medical images and demonstrate more robust performance when compared to networks with no fine-tuning, (b) these fine-tuned networks require less training data to converge to an optimal solution, and (c) fine-tuning with data from a variety of photorealistic rendering conditions of the same scene prevents the network from learning patient-specific information and aids in generalizability of the model. Our empirical evaluation demonstrates that networks fine-tuned with cinematically rendered data predict depth with 56.87% less error for rendered endoscopy images and 27.49% less error for real porcine colon endoscopy images.
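Below is a minimal, hypothetical sketch (in PyTorch) of the fine-tuning strategy the abstract describes: a depth-estimation CNN pretrained on synthetic data is further trained for a few epochs on photorealistic cinematically rendered frames paired with CT-derived depth maps. The toy architecture, checkpoint path, loss, and hyperparameters are illustrative assumptions, not the authors' implementation.

# Sketch only: adapting a synthetic-data-pretrained depth network to
# cinematically rendered CT frames. All names here are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class DepthNet(nn.Module):
    """Toy encoder-decoder mapping an RGB endoscopy frame to a dense depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fine_tune(model, rendered_loader, epochs=5, lr=1e-4, device="cpu"):
    """Fine-tune a pretrained model on (image, depth) pairs from cinematic renderings."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # small lr: adapt, do not retrain
    loss_fn = nn.L1Loss()  # assumption: simple per-pixel depth error
    for _ in range(epochs):
        for img, depth in rendered_loader:
            img, depth = img.to(device), depth.to(device)
            opt.zero_grad()
            loss = loss_fn(model(img), depth)
            loss.backward()
            opt.step()
    return model

if __name__ == "__main__":
    model = DepthNet()
    # Hypothetical checkpoint from training on purely synthetic data:
    # model.load_state_dict(torch.load("synthetic_pretrained.pt"))
    # Dummy tensors stand in for rendered frames and their CT-derived depth maps.
    fake_pairs = [(torch.rand(3, 64, 64), torch.rand(1, 64, 64)) for _ in range(8)]
    loader = DataLoader(fake_pairs, batch_size=4)
    fine_tune(model, loader, epochs=1)

As the abstract notes, fine-tuning with several rendering conditions of the same scene is what discourages the network from memorizing patient-specific appearance; in a sketch like this that would amount to the loader yielding multiple differently lit renderings per CT-derived depth map.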
