International Conference on Machine Vision Applications

Understanding the Reason for Misclassification by Generating Counterfactual Images

Abstract

Explainable AI (XAI) methods contribute to understanding the behavior of deep neural networks (DNNs) and have recently attracted interest. For example, in image classification tasks, attribution maps have been used to indicate the pixels of an input image that are important to the output decision. Often, however, it is difficult to understand the reason for a misclassification from a single attribution map alone. In this paper, in order to enrich the information related to the reason for misclassification, we propose to generate several counterfactual images using generative adversarial networks (GANs). We show empirically that these counterfactual images and their attribution maps improve the interpretability of misclassified images. Furthermore, we propose to generate transitional images by gradually changing the configuration of a GAN, in order to show clearly which part of a misclassified image causes the misclassification.
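
The abstract does not include an implementation, but the pipeline it outlines (an attribution map for a misclassified image, a GAN-generated counterfactual, and transitional images obtained by gradually moving between the two) can be sketched. The following is a minimal, hypothetical PyTorch sketch, not the authors' code: `clf` and `G` are tiny stand-in networks, the attribution map is plain input-gradient saliency, and the counterfactual is obtained by optimizing the GAN latent code toward the target class with a proximity penalty. All of these choices are assumptions, since the paper's actual models and objectives are not specified on this page.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-ins for the pretrained models (hypothetical; the paper's
# classifier and GAN are not specified on this page).
clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # image -> 10 logits
G = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Tanh())       # latent -> image

def attribution_map(image, target_class):
    """Input-gradient saliency: |d logit[target] / d pixel|, max over channels."""
    image = image.detach().clone().requires_grad_(True)
    logit = clf(image.unsqueeze(0))[0, target_class]
    logit.backward()
    return image.grad.abs().amax(dim=0)  # (H, W) saliency map

def counterfactual_latent(z0, target_class, steps=200, lr=0.05, prox=0.1):
    """Optimize the latent code until the generated image flips to target_class;
    a proximity term keeps the counterfactual close to the original latent z0."""
    z = z0.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        img = G(z).view(1, 3, 32, 32)
        loss = F.cross_entropy(clf(img), target) + prox * (z - z0).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

# A latent code whose rendering the (here untrained, hence arbitrary)
# classifier misclassifies; its counterfactual targets the true class.
z0 = torch.randn(64)
true_class = 3
z_cf = counterfactual_latent(z0, true_class)

# Transitional images: gradually move the latent from z0 toward z_cf and
# watch where the prediction flips; the attribution map at each step hints
# at which image region drives the change.
for alpha in torch.linspace(0.0, 1.0, 5).tolist():
    z = (1 - alpha) * z0 + alpha * z_cf
    img = G(z).view(3, 32, 32)
    pred = clf(img.unsqueeze(0)).argmax(dim=1).item()
    sal = attribution_map(img, pred)
    print(f"alpha={alpha:.2f}  predicted={pred}  saliency peak={sal.max().item():.3f}")
```

Interpolating in the latent space rather than in pixel space keeps every transitional image on the generator's learned image manifold, which appears to be the point of "gradually changing the configuration of a GAN": the prediction flip can then be attributed to a plausible image change rather than to adversarial pixel noise.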