Pattern Recognition Letters

On the receptive field misalignment in CAM-based visual explanations



Abstract

Visual explanations aim at providing an understanding of the inner behavior of convolutional neural networks. Naturally, it is necessary to explore whether these methods themselves are reasonable and reliable. In this paper, we focus on Class Activation Mapping (CAM), an attractive type of explanation that has been widely applied to model diagnosis and weakly supervised tasks. Our contribution is two-fold. First, we identify an important but neglected issue that affects the reliability of CAM results: there is a misalignment between the effective receptive field and the implicit receptive field, where the former is determined by the model and the input, and the latter is determined by the upsampling function in CAM. Occlusion experiments are designed to empirically verify its existence. Second, based on this finding, an adversarial marginal attack is proposed that fools the CAM-based method and the CNN model simultaneously. Experimental results demonstrate that the resulting saliency map can be completely changed to another shape by perturbing only a 1-pixel-wide margin of the input. The prototype code of the method is available at https://github.com/xpf/CAM-Adversarial-Marginal-Attack. (c) 2021 Elsevier B.V. All rights reserved.
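
To make the two ingredients of the abstract concrete, the following is a minimal sketch, not the authors' released code: a plain CAM saliency map obtained by upsampling the last convolutional feature maps (this upsampling step is what induces the "implicit receptive field" the paper contrasts with the model's effective receptive field), plus a 1-pixel-wide border mask of the kind a marginal attack would be restricted to. The torchvision ResNet-18 backbone, the `layer4`/`fc` layer names, and the `margin_mask` helper are illustrative assumptions, not details taken from the paper.

```python
# Minimal CAM sketch (assumed setup: torchvision ResNet-18, not the paper's code).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # torchvision >= 0.13; older versions use pretrained=False

def cam(x, class_idx):
    """Plain CAM: weight the last conv feature maps by the FC weights of the
    target class, then bilinearly upsample to the input resolution."""
    feats = {}
    def hook(module, inputs, output):
        feats["maps"] = output                      # (B, C, h, w) activations of layer4
    handle = model.layer4.register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()

    w = model.fc.weight[class_idx]                  # (C,) classifier weights for the class
    maps = feats["maps"][0]                         # (C, h, w)
    cam_lowres = (w[:, None, None] * maps).sum(0)   # (h, w) low-resolution CAM

    # The upsampling below is what defines the *implicit* receptive field:
    # each low-resolution cell is spread over a fixed input patch, regardless
    # of the model's *effective* receptive field for that cell.
    cam_full = F.interpolate(cam_lowres[None, None], size=x.shape[-2:],
                             mode="bilinear", align_corners=False)[0, 0]
    cam_full = torch.relu(cam_full)
    return cam_full / (cam_full.max() + 1e-8)

def margin_mask(h, w, width=1):
    """Binary mask of a width-pixel frame: the only region a marginal
    attack would be allowed to perturb (hypothetical helper)."""
    m = torch.zeros(h, w)
    m[:width, :] = 1
    m[-width:, :] = 1
    m[:, :width] = 1
    m[:, -width:] = 1
    return m

x = torch.randn(1, 3, 224, 224)      # stand-in input image
saliency = cam(x, class_idx=0)       # (224, 224) saliency map
mask = margin_mask(224, 224)         # 1-pixel-wide perturbable border
```

An attack of the kind the abstract describes would optimize a perturbation delta constrained by `mask` (delta is zero everywhere except the border) so that the saliency map of `x + delta` matches a chosen target shape while the model's prediction is also manipulated; the optimization details are in the paper and its repository.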
