
Adversarial Face De-Identification

Abstract

Recently, much research has been done on how to secure personal data, notably facial images. Face de-identification is one example of privacy protection: it protects a person's identity by fooling intelligent face recognition systems, while typically still allowing recognition by human observers. While many face de-identification methods exist, the de-identified facial images they generate do not resemble the originals. This paper proposes using adversarial examples for face de-identification, introducing minimal facial image distortion while fooling automatic face recognition systems. Specifically, it introduces P-FGVM, a novel adversarial attack method that operates in the image spatial domain and generates adversarial de-identified facial images that resemble the original ones. A comparison between P-FGVM and other adversarial attack methods shows that P-FGVM both protects privacy and preserves visual facial image quality more effectively.
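The paper's P-FGVM method is not specified in this abstract, but the general idea it builds on, perturbing an image against the gradient of a recognizer's match score under a small distortion budget, can be sketched with a generic FGSM-style step. The snippet below is an illustrative toy, not P-FGVM itself: the linear "embedding network" `W`, the function names, and the budget `eps` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-recognition embedding network: a single
# linear map W. Real recognizers are deep CNNs, but the gradient-sign
# logic is the same in spirit. All names here are illustrative and
# hypothetical, not the paper's actual P-FGVM implementation.
D_PIX, D_EMB = 64, 16
W = rng.normal(size=(D_EMB, D_PIX))

def embed(x):
    """Map a flattened face image to an identity embedding."""
    return W @ x

def match_score(x, template):
    """Higher score = recognizer is more confident the identities match."""
    return float(embed(x) @ template)

def fgsm_deidentify(x, template, eps=0.03):
    """One FGSM-style step: move the image *against* the gradient of the
    match score, under an L-infinity budget eps, so the recognizer's
    confidence drops while per-pixel distortion stays below eps."""
    grad = W.T @ template            # d(match_score)/dx for the linear model
    x_adv = x - eps * np.sign(grad)  # small step that maximally hurts the score
    return np.clip(x_adv, 0.0, 1.0)  # keep a valid image in [0, 1]

# "Enroll" a face as its own template, then de-identify it.
face = rng.uniform(0.2, 0.8, size=D_PIX)   # away from clip bounds
template = embed(face)
face_adv = fgsm_deidentify(face, template)

print(match_score(face, template) > match_score(face_adv, template))  # True
print(np.max(np.abs(face_adv - face)) <= 0.03 + 1e-12)                # True
```

Because the perturbation is bounded per pixel by `eps`, the adversarial face stays visually close to the original, which is the property the abstract claims P-FGVM improves on relative to other attacks.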
