International Conference on Biometrics

Adversarial Examples to Fool Iris Recognition Systems



Abstract

Adversarial examples have recently proven able to fool deep learning methods by adding carefully crafted small perturbations to the input image. In this paper, we study the possibility of generating adversarial examples for code-based iris recognition systems. Since generating adversarial examples requires back-propagating the adversarial loss, conventional filter-bank-based iris-code generation frameworks cannot be employed in such a setup. To compensate for this shortcoming, we propose to train a deep auto-encoder surrogate network that mimics the conventional iris-code generation procedure. This trained surrogate network is then used to generate adversarial examples with the iterative gradient sign method [15]. We consider non-targeted and targeted attacks through three attack scenarios, and for each we study the possibility of fooling an iris recognition system in both white-box and black-box settings.
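The attack the abstract describes rests on the iterative gradient sign method: repeatedly nudge the input in the sign of the loss gradient obtained from a differentiable surrogate, while keeping the total perturbation inside an L-infinity ball. The sketch below is a minimal, hedged illustration of that loop; the linear "surrogate" loss and all parameter names are illustrative stand-ins, not the paper's auto-encoder or settings.

```python
import numpy as np

def igsm_attack(x, grad_fn, eps=0.05, alpha=0.01, steps=10):
    """Non-targeted iterative gradient sign method (sketch).

    x       : clean input in [0, 1]
    grad_fn : returns dL/dx from a differentiable surrogate
    eps     : L-infinity budget for the total perturbation
    alpha   : per-step size
    """
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # surrogate gradient
        x_adv = x_adv + alpha * np.sign(g)        # ascend the adversarial loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image range
    return x_adv

# Toy stand-in for the surrogate: loss L(x) = w.x, so dL/dx = w.
# (In the paper this gradient would come from the trained auto-encoder.)
rng = np.random.default_rng(0)
w = rng.standard_normal(16)
x = rng.uniform(0.2, 0.8, size=16)
x_adv = igsm_attack(x, lambda z: w)
```

With `alpha * steps` exceeding `eps`, the perturbation saturates the budget in every coordinate, so the surrogate loss strictly increases while the change stays bounded by `eps`.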
