International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)

Generalizability vs. Robustness: Investigating Medical Imaging Networks Using Adversarial Examples



Abstract

In this paper, for the first time, we propose an evaluation method for deep learning models that assesses the performance of a model not only in an unseen test scenario, but also in extreme cases of noise, outliers, and ambiguous input data. To this end, we utilize adversarial examples, images that fool machine learning models while appearing imperceptibly different from the original data, as a measure to evaluate the robustness of a variety of medical imaging models. Through extensive experiments on skin lesion classification and whole-brain segmentation with state-of-the-art networks such as Inception and UNet, we show that models achieving comparable generalizability may differ significantly in their perception of the underlying data manifold, leading to an extensive performance gap in their robustness.
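The abstract's evaluation protocol relies on adversarial examples: inputs perturbed just enough to change a model's output while remaining visually indistinguishable from the original. A common way to generate them is the Fast Gradient Sign Method (FGSM), which nudges the input in the direction of the sign of the loss gradient. The sketch below is purely illustrative, not the authors' implementation: it applies FGSM to a toy logistic-regression "model" in plain NumPy, where the gradient has a closed form.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """FGSM perturbation for logistic regression.

    For loss L = -log p(y|x) with p = sigmoid(w.x + b),
    the input gradient is dL/dx = (p - y) * w.
    The adversarial input steps eps along sign(dL/dx).
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def nll(x, y, w, b):
    """Negative log-likelihood of label y under the model."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -np.log(p if y == 1 else 1.0 - p)

# Toy model and a correctly classified input (hypothetical values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1
x_adv = fgsm(x, y, w, b, eps=0.5)

# The perturbation raises the loss, i.e. degrades the model's confidence.
print(nll(x, y, w, b), nll(x_adv, y, w, b))
```

In the paper's setting the model is a deep network and the gradient comes from backpropagation, but the principle is the same: the size of the performance drop under such perturbations serves as the robustness measure.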
