Neurocomputing

Data driven exploratory attacks on black box classifiers in adversarial domains



Abstract

While modern-day web applications aim to create impact at a civilizational scale, they have become vulnerable to adversarial activity, where the next cyber-attack can take any shape and originate from anywhere. The increasing scale and sophistication of attacks has prompted the need for data-driven solutions, with machine learning forming the core of many cybersecurity systems. Machine learning was not designed with security in mind, and its essential assumption of stationarity, which requires that training and testing data follow similar distributions, is violated in an adversarial domain. In this paper, an adversary's viewpoint of a classification-based system is presented. Based on a formal adversarial model, the Seed-Explore-Exploit framework is presented for simulating the generation of data-driven and reverse-engineering attacks on classifiers. Experimental evaluation on 10 real-world datasets, using the Google Cloud Prediction Platform, demonstrates the innate vulnerability of classifiers and the ease with which evasion can be carried out without any explicit information about the classifier type, the training data, or the application domain. The proposed framework, algorithms, and empirical evaluation serve as a white-hat analysis of these vulnerabilities and aim to foster the development of secure machine learning frameworks. (C) 2018 Elsevier B.V. All rights reserved.
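The abstract only names the Seed-Explore-Exploit framework; as a rough illustration of the kind of label-only black-box probing it describes, the sketch below seeds an attacker with one detected and one undetected sample, explores the decision boundary by binary search over queries, and exploits the result to craft an evasion sample. This is not the paper's algorithm: the stand-in classifier, the seed points, and the line-search heuristic are all assumptions made here for illustration.

```python
# Minimal sketch of a Seed-Explore-Exploit style black-box probe (illustrative
# only, not the paper's algorithm). The black box is a stand-in scikit-learn
# model; the attacker only ever calls query(), never inspects the model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# --- Defender side: an opaque classifier the attacker cannot inspect ---
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # 1 = malicious, 0 = benign
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def query(x):
    """The only interface the attacker has: submit a sample, get a label."""
    return int(black_box.predict(x.reshape(1, -1))[0])

# --- Attacker side ---
# Seed: one sample known to be flagged malicious, and one observed to pass
# as benign (e.g. legitimate traffic the attacker can watch).
malicious_seed = np.array([2.0, 2.0])
benign_seed = np.array([-2.0, -2.0])

# Explore: binary search along the segment between the seeds to locate the
# decision boundary using only label queries.
lo, hi = 0.0, 1.0                                 # fraction of the way to benign
for _ in range(20):
    mid = (lo + hi) / 2
    probe = malicious_seed + mid * (benign_seed - malicious_seed)
    if query(probe) == 1:
        lo = mid                                  # still detected: keep moving
    else:
        hi = mid                                  # evades: tighten toward boundary

# Exploit: an evasion sample just on the benign side of the boundary, as close
# as possible to the original malicious seed.
evasion = malicious_seed + hi * (benign_seed - malicious_seed)
print("evasion sample:", evasion, "-> label:", query(evasion))
```

The point of the sketch is that nothing about the classifier type, its training data, or the feature semantics is needed: label queries alone suffice to map the boundary and place samples past it, which is the vulnerability the abstract reports.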


