International Conference on Machine Learning for Cyber Security

Efficient Defense Against Adversarial Attacks and Security Evaluation of Deep Learning System



Abstract

Deep neural networks (DNNs) have achieved strong performance on classical artificial intelligence problems, including visual recognition and natural language processing. Unfortunately, recent studies show that machine learning models are vulnerable to adversarial attacks: purposeful distortions of the inputs that cause incorrect outputs. For images, such subtle distortions are usually imperceptible to humans, yet they successfully fool machine learning models. In this paper, we propose FeaturePro, a strategy for defending machine learning models against adversarial examples and for evaluating the security of deep learning systems. We tackle this challenge by reducing the feature space visible to the adversary. By performing white-box, black-box, targeted, and non-targeted attacks, we can evaluate the security of deep learning algorithms, an important indicator for assessing artificial intelligence systems. We also analyze generalization and robustness when FeaturePro is combined with adversarial training. FeaturePro defends efficiently against adversarial attacks, with high accuracy and low false positive rates.
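The abstract does not detail how FeaturePro works, but the adversarial examples it targets are typically generated by gradient-based methods. A minimal sketch of one such attack, the Fast Gradient Sign Method (FGSM, Goodfellow et al. 2015), applied to a toy logistic-regression model (the model, weights, and function names here are illustrative assumptions, not the paper's setup):

```python
import math

def fgsm_perturb(x, w, b, y_true, eps):
    """Perturb input x within an L-infinity budget eps so as to increase
    the binary cross-entropy loss of the linear model (w, b).

    This is a hypothetical stand-in for the attacks the paper evaluates;
    FeaturePro's own mechanism is not specified in the abstract.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))              # sigmoid probability
    # Gradient of binary cross-entropy w.r.t. each input feature:
    # dL/dx_i = (p - y_true) * w_i
    grad = [(p - y_true) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    # Step each feature by eps in the direction that increases the loss.
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [1.0, -2.0, 0.5], 0.0                    # toy model parameters
x = [0.4, -0.3, 0.2]                            # clean input
y = 1.0                                         # its (correct) label
score = lambda v: sum(wi * vi for wi, vi in zip(w, v)) + b

x_adv = fgsm_perturb(x, w, b, y, eps=0.25)
print("clean score:", score(x))                 # positive (class 1)
print("adversarial score:", score(x_adv))       # pushed toward class 0
```

Each feature of `x_adv` differs from `x` by at most `eps`, which is why such perturbations can remain imperceptible on images while still degrading the model's score; a defense like the one proposed would aim to keep the prediction stable under exactly this kind of bounded distortion.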
