International Conference on Science of Cyber Security

Security Comparison of Machine Learning Models Facing Different Attack Targets

Abstract

Machine learning has exhibited great performance in practical application domains such as computer vision, natural language processing, and autonomous driving. As it becomes more widely used in practice, its security issues have attracted increasing attention. Previous research shows that machine learning models are highly vulnerable to various kinds of adversarial attacks. Therefore, we need to evaluate the security of different machine learning models under different attacks. In this paper, we aim to provide a security comparison method for different machine learning models. We first classify adversarial attacks into three classes by their attack targets: attacks on test data, attacks on training data, and attacks on model parameters, and identify subclasses under different adversarial assumptions. We then take support vector machines (SVM), neural networks with one hidden layer (NN), and convolutional neural networks (CNN) as examples, and launch different kinds of attacks against them to evaluate and compare model security. Additionally, our experiments illustrate the effects of the concealment actions taken by the adversary.
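To make the first attack class concrete, the sketch below implements the Fast Gradient Sign Method (FGSM), a standard evasion attack on test data, against a toy one-hidden-layer network of the kind the abstract calls "NN". This is an illustrative assumption, not the paper's exact attack or experimental setup: the model sizes, the input shape, and the perturbation budget epsilon are all placeholder values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Craft x_adv = x + epsilon * sign(grad_x loss): a one-step evasion
    attack that perturbs a *test* input to increase the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy stand-in for the "NN" model class: one hidden layer.
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.rand(1, 784)   # placeholder test sample (e.g., a flattened image)
y = torch.tensor([3])    # its assumed true label
x_adv = fgsm_attack(model, x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # may now disagree
```

The drop in accuracy on such perturbed test inputs, measured separately for each model (SVM, NN, CNN), is the kind of signal a security comparison of this sort would rely on; attacks on training data (poisoning) and on model parameters would be evaluated analogously under their own threat models.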
