International Workshop on Self-Organizing Maps

Robustness of Generalized Learning Vector Quantization Models Against Adversarial Attacks



Abstract

Adversarial attacks and the development of (deep) neural networks robust against them are currently two widely researched topics. The robustness of Learning Vector Quantization (LVQ) models against adversarial attacks has however not yet been studied to the same extent. We therefore present an extensive evaluation of three LVQ models: Generalized LVQ, Generalized Matrix LVQ and Generalized Tangent LVQ. The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state-of-the-art in robust neural network methods. In contrast to this, Generalized Matrix LVQ shows a high susceptibility to adversarial attacks, scoring consistently behind all other models. Additionally, our numerical evaluation indicates that increasing the number of prototypes per class improves the robustness of the models.
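The abstract concerns prototype-based classifiers. As a rough illustration only (not taken from the paper), the sketch below shows the nearest-prototype decision rule shared by GLVQ-type models, the GLVQ relative-distance measure, and a simple probe that nudges an input toward the closest wrong-class prototype to see when the predicted label flips. The prototype positions, the sample point, and the perturbation scheme are invented for this example; the paper's attack methods, datasets, and results are not reproduced here.

```python
import numpy as np

# Toy GLVQ-style setup: two classes, two prototypes per class.
# Values are illustrative, not from the paper.
prototypes = np.array([[0.0, 0.0], [1.0, 0.0],   # class 0
                       [3.0, 3.0], [4.0, 3.0]])  # class 1
proto_labels = np.array([0, 0, 1, 1])

def classify(x):
    """Nearest-prototype decision rule (squared Euclidean distance)."""
    d = np.sum((prototypes - x) ** 2, axis=1)
    return proto_labels[np.argmin(d)]

def glvq_margin(x, y):
    """GLVQ relative distance mu(x) = (d+ - d-) / (d+ + d-).

    d+ is the distance to the closest prototype of the correct class y,
    d- the distance to the closest prototype of any other class.
    Negative values indicate a correct classification."""
    d = np.sum((prototypes - x) ** 2, axis=1)
    d_plus = d[proto_labels == y].min()
    d_minus = d[proto_labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)

# Crude adversarial probe (not one of the attacks evaluated in the paper):
# move the input toward the nearest wrong-class prototype and watch the label.
x, y = np.array([1.4, 0.4]), 0
wrong = prototypes[proto_labels != y]
direction = wrong[np.argmin(np.sum((wrong - x) ** 2, axis=1))] - x
direction /= np.linalg.norm(direction)

for eps in (0.0, 0.5, 1.0, 1.5, 2.0):
    x_adv = x + eps * direction
    print(f"eps={eps:.1f}  label={classify(x_adv)}  margin={glvq_margin(x_adv, y):+.3f}")
```

The printout shows the margin moving from clearly negative (correct, far from the decision boundary) toward positive as the perturbation grows, which is the kind of distance-to-boundary behaviour the robustness evaluation in the paper quantifies for GLVQ, GMLVQ and GTLVQ.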

