IEEE Security and Privacy Workshops

Clipped BagNet: Defending Against Sticker Attacks with Clipped Bag-of-features

Abstract

Many works have demonstrated that neural networks are vulnerable to adversarial examples. We examine the adversarial sticker attack, where the attacker places a sticker somewhere on an image to cause it to be misclassified. We take a first step towards defending against such attacks using clipped BagNet, which bounds the influence that any limited-size sticker can have on the final classification. We evaluate our scheme on ImageNet and show that it provides strong security against targeted PGD attacks and gradient-free attacks, and yields certified security for 95% of images against a targeted 20 × 20 pixel attack.
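The defense outlined in the abstract rests on bounding each local patch's influence before aggregation. Below is a minimal NumPy sketch of that idea, assuming per-patch class logits of shape (H, W, C) from a BagNet-style backbone; the function names, the clipping range, and the certification margin check are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clipped_bagnet_scores(patch_logits, clip_min=0.0, clip_max=1.0):
    """Aggregate per-patch class logits with clipping (illustrative sketch).

    patch_logits: array of shape (H, W, C) holding the class evidence
    produced by each local patch of a BagNet-style backbone.  Clipping
    bounds the contribution of any single patch, so a sticker that
    overlaps at most k patch locations can shift each class's averaged
    score by no more than k * (clip_max - clip_min) / (H * W).
    """
    clipped = np.clip(patch_logits, clip_min, clip_max)
    return clipped.mean(axis=(0, 1))  # per-class scores, shape (C,)

def is_certified(scores, num_affected_patches, num_patches,
                 clip_min=0.0, clip_max=1.0):
    """Illustrative certification check: the prediction cannot be flipped
    if the margin between the top two classes exceeds the largest swing a
    sticker covering `num_affected_patches` patches could cause by
    lowering the top class and raising the runner-up simultaneously."""
    top_two = np.sort(scores)[-2:]
    margin = top_two[1] - top_two[0]
    max_swing = 2 * num_affected_patches * (clip_max - clip_min) / num_patches
    return margin > max_swing
```

Here `num_affected_patches` would be set to the number of patch locations whose receptive field a worst-case 20 × 20 sticker can overlap, which depends on the backbone's receptive field and stride.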
