IEEE International Conference on Acoustics, Speech and Signal Processing

Cleaning Adversarial Perturbations via Residual Generative Network for Face Verification



Abstract

Deep neural networks (DNNs) have recently achieved impressive performance on a wide range of applications. However, recent research shows that DNNs are vulnerable to adversarial perturbations injected into input samples. In this paper, we investigate a defense method for face verification: a deep residual generative network (ResGN) is learned to clean adversarial perturbations. We propose a novel training framework composed of the ResGN, a pre-trained VGG-Face network, and a FaceNet network. The parameters of the ResGN are optimized by minimizing a joint loss consisting of a pixel loss, a texture loss, and a verification loss, which measure the content errors, subjective visual perception errors, and verification task errors between the cleaned image and the legitimate image, respectively. Specifically, the latter two losses are provided by VGG-Face and FaceNet, respectively, and contribute substantially to improving the verification performance of the cleaned image. Empirical results validate the effectiveness of the proposed defense method on the Labeled Faces in the Wild (LFW) benchmark dataset.
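The joint objective described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: `feat_fn` stands in for a VGG-Face feature extractor (texture loss via Gram matrices, as in perceptual-loss methods) and `embed_fn` for a FaceNet embedding network; the loss weights `w_pix`, `w_tex`, `w_ver` are hypothetical placeholders.

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H, W) feature map; the Gram matrix of channel
    # activations captures second-order texture statistics.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def joint_loss(cleaned, legit, feat_fn, embed_fn,
               w_pix=1.0, w_tex=1.0, w_ver=1.0):
    # Pixel loss: per-pixel content error between cleaned and legitimate image.
    pixel = np.mean((cleaned - legit) ** 2)
    # Texture loss: Gram-matrix mismatch of feature maps (VGG-Face stand-in).
    tex = np.mean((gram_matrix(feat_fn(cleaned))
                   - gram_matrix(feat_fn(legit))) ** 2)
    # Verification loss: squared distance between identity embeddings
    # (FaceNet stand-in), pushing the cleaned image toward the same identity.
    ver = np.sum((embed_fn(cleaned) - embed_fn(legit)) ** 2)
    return w_pix * pixel + w_tex * tex + w_ver * ver

# Toy usage with stand-in networks: identical images give zero loss.
rng = np.random.default_rng(0)
img = rng.random((3, 8, 8))
feat = lambda x: x                      # identity "feature extractor"
emb = lambda x: x.mean(axis=(1, 2))    # crude 3-d "embedding"
print(joint_loss(img, img, feat, emb))  # → 0.0
```

In training, the gradient of this joint loss with respect to the ResGN parameters would be backpropagated through the (frozen) feature and embedding networks.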
