International Conference on Digital Image Processing

Detection of sticker based adversarial attacks



Abstract

Adversarial examples revealed an important aspect of convolutional neural networks and are receiving more and more attention in machine learning. It was shown that not only small perturbations covering the whole image, but also sticker-based attacks concentrated on small regions of the image, can cause misclassification. While the first type of attack is mostly of theoretical interest, the latter can be applied in practice and lead to misclassification in image processing pipelines. In this paper we show how sticker-based adversarial samples can be detected by calculating the responses of the neurons in the last layers and estimating a measure of region-based classification consistency.
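
The abstract describes the detector only at a high level. As a minimal sketch, assuming a PyTorch image classifier, one way to estimate a region-based classification consistency score is to occlude the image region by region and check whether the prediction stays stable. The function name region_consistency, the occlusion grid size, and the zero-fill patch below are illustrative assumptions, not the authors' implementation, and the neuron-response component of the paper's method is not modeled here.

import torch

def region_consistency(model, image, grid=4):
    """Estimate region-based classification consistency (illustrative sketch).

    The image is occluded one grid cell at a time, and the score is the
    fraction of occlusions whose predicted class agrees with the prediction
    on the unmodified image. A localized sticker attack tends to flip the
    prediction exactly when its cell is covered, lowering the score, while
    clean images are usually robust to small occlusions.
    """
    model.eval()
    with torch.no_grad():
        base_pred = model(image.unsqueeze(0)).argmax(dim=1).item()
        _, h, w = image.shape  # image is a (C, H, W) tensor
        ch, cw = h // grid, w // grid
        agreements = 0
        for i in range(grid):
            for j in range(grid):
                occluded = image.clone()
                occluded[:, i * ch:(i + 1) * ch, j * cw:(j + 1) * cw] = 0.0
                pred = model(occluded.unsqueeze(0)).argmax(dim=1).item()
                agreements += int(pred == base_pred)
    return agreements / (grid * grid)

In this sketch, an image whose score falls below a threshold tuned on clean validation data would be flagged as a potential sticker attack.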
