IEEE International Conference on Image Processing

Security of Facial Forensics Models Against Adversarial Attacks



Abstract

Deep neural networks (DNNs) have been used in digital forensics to identify fake facial images. We investigated several DNN-based forgery forensics models (FFMs) to examine whether they are secure against adversarial attacks. We experimentally demonstrated the existence of individual adversarial perturbations (IAPs) and universal adversarial perturbations (UAPs) that can cause a well-performing FFM to misbehave. Using an iterative procedure, gradient information is exploited to generate two kinds of IAPs that fabricate classification and segmentation outputs. UAPs, in contrast, are generated on the basis of over-firing: we designed a new objective function that encourages neurons to over-fire, which makes UAP generation feasible even without training data. Experiments demonstrated the transferability of UAPs across unseen datasets and unseen FFMs. Moreover, a subjective assessment of the imperceptibility of the adversarial perturbations revealed that the crafted UAPs are visually negligible. These findings provide a baseline for evaluating the adversarial security of FFMs.
