IEEE Symposium on Security and Privacy

DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model

Abstract

Deep learning (DL) models are inherently vulnerable to adversarial examples - maliciously crafted inputs that trigger target DL models to misbehave - which significantly hinders the application of DL in security-sensitive domains. Intensive research on adversarial learning has led to an arms race between adversaries and defenders. Such a plethora of emerging attacks and defenses raises many questions: Which attacks are more evasive, preprocessing-proof, or transferable? Which defenses are more effective, utility-preserving, or general? Are ensembles of multiple defenses more robust than individual defenses? Yet, due to the lack of platforms for comprehensive evaluation of adversarial attacks and defenses, these critical questions remain largely unresolved. In this paper, we present the design, implementation, and evaluation of DEEPSEC, a uniform platform that aims to bridge this gap. In its current implementation, DEEPSEC incorporates 16 state-of-the-art attacks with 10 attack utility metrics, and 13 state-of-the-art defenses with 5 defensive utility metrics. To the best of our knowledge, DEEPSEC is the first platform that enables researchers and practitioners to (i) measure the vulnerability of DL models, (ii) evaluate the effectiveness of various attacks/defenses, and (iii) conduct comparative studies on attacks/defenses in a comprehensive and informative manner. Leveraging DEEPSEC, we systematically evaluate existing adversarial attack and defense methods and draw a set of key findings that demonstrate DEEPSEC's rich functionality: (1) the trade-off between misclassification and imperceptibility is empirically confirmed; (2) most defenses that claim to be universally applicable can only defend against limited types of attacks under restricted settings; (3) adversarial examples with higher perturbation magnitude are not necessarily easier to detect; (4) an ensemble of multiple defenses cannot improve the overall defense capability, but it can raise the lower bound of the defense effectiveness of its individual members. Extensive analysis on DEEPSEC demonstrates its capabilities and advantages as a benchmark platform that can benefit future adversarial learning research.
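To make the notion of "attack utility metrics" mentioned in the abstract concrete, the sketch below illustrates two typical measures: the misclassification rate induced by adversarial examples and their average L-infinity perturbation magnitude. This is a hypothetical, NumPy-only illustration, not DEEPSEC's actual API; the `predict` function and the toy arrays are stand-ins for a real target model and dataset.

```python
# Hypothetical sketch of two attack utility metrics of the kind DEEPSEC reports:
# misclassification rate and average L-infinity perturbation magnitude.
# Not DEEPSEC's actual API; `predict`, the inputs, and the labels are stand-ins.
import numpy as np

def predict(x: np.ndarray) -> np.ndarray:
    """Stand-in classifier: returns one predicted class per example."""
    # A real evaluation would query the target DL model here.
    return (x.sum(axis=(1, 2, 3)) > 0).astype(int)

def misclassification_rate(x_adv: np.ndarray, y_true: np.ndarray) -> float:
    """Fraction of adversarial examples the model labels incorrectly."""
    return float(np.mean(predict(x_adv) != y_true))

def avg_linf_perturbation(x: np.ndarray, x_adv: np.ndarray) -> float:
    """Average L-infinity distance between clean and adversarial inputs."""
    per_example = np.abs(x_adv - x).reshape(len(x), -1).max(axis=1)
    return float(per_example.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 3, 32, 32))            # toy clean inputs
    y = predict(x)                                  # clean predictions as ground truth
    x_adv = x + rng.uniform(-0.03, 0.03, x.shape)   # toy "adversarial" perturbation
    print("misclassification rate:", misclassification_rate(x_adv, y))
    print("avg L-inf perturbation:", avg_linf_perturbation(x, x_adv))
```

Reporting both measures side by side is what surfaces the misclassification/imperceptibility trade-off noted in finding (1): stronger perturbations tend to raise the first metric while also raising the second.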
