IEEE Security and Privacy Workshops

Minimum-Norm Adversarial Examples on KNN and KNN based Models

Abstract

We study the robustness against adversarial examples of kNN classifiers and classifiers that combine kNN with neural networks. The main difficulty lies in the fact that finding an optimal attack on kNN is intractable for typical datasets. In this work, we propose a gradient-based attack on kNN and kNN-based defenses, inspired by the previous work by Sitawarin & Wagner [1]. We demonstrate that our attack outperforms their method on all of the models we tested with only a minimal increase in the computation time. The attack also beats the state-of-the-art attack [2] on kNN when $k > 1$ using less than 1% of its running time. We hope that this attack can be used as a new baseline for evaluating the robustness of kNN and its variants.
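As an illustration of the general idea only (not the authors' exact formulation), the sketch below shows how a gradient-based attack on a kNN classifier can be set up: the hard neighbor vote is not differentiable, so it is replaced by a softmax-weighted surrogate over negative distances, and the perturbation norm is penalized so the resulting adversarial example stays small. All names and parameters here (soft_knn_attack, temp, norm_weight, etc.) are hypothetical choices made for this sketch.

import torch

def knn_predict(x, train_x, train_y, k=5):
    # Plain (hard) kNN prediction, used only to check whether the attack succeeded.
    d = torch.cdist(x.unsqueeze(0), train_x).squeeze(0)   # distances to all training points
    idx = d.topk(k, largest=False).indices                # indices of the k nearest neighbors
    return train_y[idx].mode().values                     # majority label among the neighbors

def soft_knn_attack(x, y, train_x, train_y, n_classes, k=5,
                    steps=200, lr=0.05, temp=10.0, norm_weight=1.0):
    # Illustrative sketch, not the paper's attack: minimize a margin loss on a
    # soft (differentiable) kNN surrogate plus a penalty on the perturbation norm.
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    onehot = torch.nn.functional.one_hot(train_y, n_classes).float()
    true_mask = torch.nn.functional.one_hot(torch.tensor(y), n_classes).bool()
    for _ in range(steps):
        adv = x + delta
        d = torch.cdist(adv.unsqueeze(0), train_x).squeeze(0)
        w = torch.softmax(-temp * d, dim=0)                # soft neighbor weights
        class_scores = w @ onehot                          # soft class "votes"
        true_score = class_scores[y]
        other_best = class_scores.masked_fill(true_mask, float('-inf')).max()
        margin = true_score - other_best                   # > 0 while still classified as y
        loss = torch.relu(margin) + norm_weight * delta.norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
        if knn_predict((x + delta).detach(), train_x, train_y, k) != y:
            break                                          # hard kNN vote has flipped
    return (x + delta).detach()

This snippet only conveys the soft-relaxation idea on a plain kNN classifier; the attack described in the abstract also handles models that combine kNN with neural networks and is engineered to find minimum-norm adversarial examples far more efficiently.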
