International Conference on Machine Learning for Cyber Security

A Poisoning Attack Against the Recognition Model Trained by the Data Augmentation Method



Abstract

Model training pipelines often preprocess the training set with data augmentation. Targeting this training mode, this paper proposes a poisoning attack scheme that carries out the attack effectively. For a traffic sign recognition system, the decision boundary is altered through data poisoning so that the system misclassifies the target sample. In this scheme, a "backdoor" belonging to the attacker is embedded in the poisoned samples, allowing the attacker to manipulate the recognition model (i.e., the target sample is classified into the attacker's chosen category). The attack is difficult to detect because the victim takes a poisoned sample for a healthy one. Experimental results show that the scheme successfully attacks a model trained with data augmentation, realizes a targeted attack against a selected target, and completes the attack with a high success rate. It is hoped that this work will raise awareness of the important issues of data reliability and data provenance.
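The abstract does not give the paper's trigger design, poisoning rate, or target classes; the sketch below only illustrates the general backdoor-poisoning recipe it describes: stamp a small trigger pattern onto a fraction of training images and relabel them to the attacker's target class before the set enters the victim's training pipeline. All function names, the corner-patch trigger, and the 5% poisoning rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

def stamp_trigger(image, patch_size=4, value=255):
    """Stamp a small square trigger in the bottom-right corner.

    The patch location/shape is a hypothetical trigger design; the paper's
    actual trigger is not specified in the abstract.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, ...] = value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.05, rng=None):
    """Return a copy of (images, labels) with a fraction of samples
    trigger-stamped and relabeled to the attacker's target class."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = stamp_trigger(poisoned_images[i])
        poisoned_labels[i] = target_class  # attacker-chosen label
    return poisoned_images, poisoned_labels
```

On this reading, the poisoned set then passes through the victim's standard augmentation pipeline, and augmented copies of stamped images inherit the trigger, which would be consistent with the abstract's claim that the attack succeeds against models trained with data augmentation.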
