International Conference on Computational Intelligence in Security for Information Systems

Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment



Abstract

Deep Neural Networks were first developed decades ago, but only recently have they come into widespread use, owing to their computing power requirements. Since then, they have increasingly been applied across many fields and have undergone far-reaching advancements. More importantly, they have been used for critical tasks, such as decision-making in healthcare procedures or autonomous driving, where risk management is crucial. Any mistake in diagnostics or decision-making in these fields could cause grave accidents, or even death. This is concerning, because it has been repeatedly reported that attacking this type of model is straightforward. These attacks must therefore be studied so that their risk can be assessed, and defenses need to be developed to make models more robust. For this work, the most widely known attack was selected (the adversarial attack) and several defenses were implemented against it (i.e. adversarial training, dimensionality reduction and prediction similarity). Dimensionality reduction and prediction similarity were the proposed defenses, while the adversarial training defense was implemented only as a baseline for comparison with them. The obtained defenses make the model more robust while maintaining similar accuracy. The new defenses were developed using a breast cancer dataset together with a VGG16 model and a dense neural network model, but the solutions could be applied to datasets from other areas and to different convolutional and dense deep neural network models.
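The abstract does not specify which adversarial attack was used, but the best-known instance is the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction of the loss gradient's sign. A minimal sketch on a toy logistic-regression "model" (the weights and inputs below are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.5):
    """FGSM sketch: push x in the direction that increases the
    cross-entropy loss, bounded by eps per feature."""
    p = sigmoid(np.dot(w, x) + b)       # model's predicted probability
    grad_x = (p - y_true) * w           # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)    # adversarial example

# Illustrative fixed model and clean input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
clean_score = sigmoid(np.dot(w, x) + b)      # confidence on clean input
adv_score = sigmoid(np.dot(w, x_adv) + b)    # confidence after perturbation
# The bounded perturbation lowers the model's confidence in the true class.
```

For deep models like the VGG16 mentioned above, the gradient with respect to the input would be obtained by backpropagation rather than the closed form used here; the principle is identical.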
