International Symposium on Advanced Electrical and Communication Technologies

Evaluation and Analysis of Robustness of Adversarial Examples Attacks in Deep Neural Networks



Abstract

Neural networks have revolutionized the field of artificial intelligence and have given rise to applications across a wide range of areas. However, they have been shown to contain flaws that an attacker can exploit to mount adversarial examples attacks. Several studies have attempted to design defensive mechanisms to secure neural networks; nevertheless, these mitigation techniques remain insufficient to address all the vulnerabilities that may reside in these architectures. In this article, we address the security of neural networks in depth. We begin with a study of the different approaches to neural network security. We then detail the significant sources of vulnerability in neural networks. Afterward, we examine the theory of adversarial examples attacks and the optimization problem underlying them. Thereafter, we conduct a comparative study of the most common mitigation techniques in the literature. Finally, we propose a framework devoted to assessing robustness against adversarial examples.
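
For reference, the paper's own formulation is not reproduced on this page; the optimization problem behind adversarial examples attacks is commonly stated in the literature (e.g., Szegedy et al., 2014) as a minimal-perturbation search: given a classifier C and an input x, find the smallest perturbation δ, under some p-norm, that changes the prediction:

    minimize ||δ||_p   subject to   C(x + δ) ≠ C(x),   x + δ ∈ [0, 1]^m

A canonical first-order attack derived from this problem is the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015), which takes a single step of size ε along the sign of the loss gradient with respect to the input. The sketch below is illustrative only, not code from the paper; it assumes a PyTorch classifier model, inputs x scaled to [0, 1], and integer class labels y.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Illustrative FGSM sketch: one signed-gradient step of size epsilon,
    yielding x_adv with ||x_adv - x||_inf <= epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # the attacker maximizes this loss
    loss.backward()                          # gradient w.r.t. the input, not the weights
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in the valid range

An attack of this kind succeeds when model(x_adv).argmax(dim=1) differs from y while the perturbation remains imperceptibly small, which is the kind of robustness property an evaluation framework such as the one proposed in the paper would measure.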
