Journal of Artificial Intelligence (English)

An Adversarial Attack System for Face Recognition


Abstract

Deep neural networks (DNNs) are widely adopted in daily life, and their security problems have drawn attention from both scientific researchers and industrial engineers. Many related works show that DNNs are vulnerable to adversarial examples, which are generated by adding subtle perturbations to original images in both the digital and physical domains. As one of the most common applications of DNNs, face recognition systems could cause serious consequences if attacked by adversarial examples. In this paper, we implement an adversarial attack system for face recognition in both the digital domain, where it generates adversarial face images that fool the recognition system, and the physical domain, where it generates customized glasses that fool the system when a person wears them. Experiments show that our system attacks face recognition systems effectively. Furthermore, our system can misguide the recognition system into identifying a person wearing the customized glasses as a chosen target. We hope this research helps raise attention to artificial intelligence security and promotes the building of robust recognition systems.
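The abstract does not specify the paper's attack algorithm, but the idea of a "subtle perturbation" in the digital domain can be illustrated with a minimal fast gradient sign method (FGSM) sketch on a hypothetical toy linear classifier; all names, the model, and the budget `eps` here are illustrative assumptions, not the authors' method:

```python
import numpy as np

# Toy stand-in for a recognizer: a linear classifier score = w . x.
# Its loss gradient w.r.t. the input is known in closed form, so we can
# take one FGSM step: add eps * sign(gradient), a bounded, subtle change.

rng = np.random.default_rng(0)
w = rng.normal(size=16)        # hypothetical model weights
x = rng.normal(size=16)        # stand-in for a flattened face image
y = 1.0                        # true label (+1)
eps = 0.1                      # L-infinity perturbation budget (assumed)

def hinge_loss(v):
    # hinge-style loss for the linear classifier
    return max(0.0, 1.0 - y * float(w @ v))

# Gradient of the loss w.r.t. the input (zero if the margin is satisfied)
grad = -y * w if hinge_loss(x) > 0 else np.zeros_like(x)

# FGSM step: each pixel moves by at most eps, so the change stays subtle
x_adv = x + eps * np.sign(grad)
```

The perturbation is bounded element-wise by `eps`, which is what makes adversarial examples hard to spot visually while still shifting the model's decision; real attacks on face recognition apply the same principle to a deep network's input gradient.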
