IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks

Abstract

Adversarial attacks have exposed serious vulnerabilities in deep neural networks (DNNs), causing misclassifications through human-imperceptible perturbations to DNN inputs. We explore a new direction in the field of adversarial attacks by proposing attacks that aim to degrade the energy consumption or latency of DNNs rather than their classification accuracy. As a specific embodiment of this new threat vector, we propose and demonstrate adversarial sparsity attacks, which modify a DNN's inputs so as to reduce sparsity (i.e., the incidence of zeros) in its internal activation values. Exploiting sparsity in hardware and software has emerged as a popular approach to improving DNN efficiency in resource-constrained systems. The proposed attack therefore increases the execution time and energy consumption of sparsity-optimized DNN implementations, raising concerns about their deployment in latency- and energy-critical applications. We propose a systematic methodology for generating adversarial inputs for sparsity attacks by formulating an objective function that quantifies the network's activation sparsity and minimizing this function using iterative gradient-descent techniques. To prevent easy detection of the attack, we further ensure that the perturbation magnitude remains within a specified constraint and that the perturbation does not affect classification accuracy. We launch both white-box and black-box versions of adversarial sparsity attacks on image-recognition DNNs and demonstrate that they decrease activation sparsity by 1.16x-1.82x. On a sparsity-optimized DNN accelerator, the attack degrades latency by 1.12x-1.59x and energy-delay product (EDP) by 1.18x-1.99x. Additionally, we analyze the impact of various hyperparameters and constraints on the attack's efficacy. Finally, we evaluate defense techniques, such as activation thresholding and input quantization, and demonstrate that the proposed attack withstands them, highlighting the need for further efforts in this new direction within the field of adversarial machine learning.
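The attack formulation described above lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of the white-box variant: it maximizes a smooth surrogate for activation density (a tanh of post-ReLU magnitudes) under an L-infinity perturbation bound, with a cross-entropy penalty to keep the original predicted label intact. The tanh surrogate, the loss weighting, the hyperparameter values, the assumption of inputs in [0, 1], and the reliance on the model exposing torch.nn.ReLU modules are all illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sparsity_attack(model, x, eps=8 / 255, steps=100, lr=1e-2):
    """Hypothetical sketch: perturb x within an L-infinity ball of
    radius eps so that post-ReLU activations move away from zero
    (lower sparsity), while a cross-entropy term preserves the
    original predicted label, keeping the attack hard to detect."""
    model.eval()
    activations = []

    # Record post-ReLU activations with forward hooks.
    def hook(_module, _inputs, out):
        activations.append(out)

    handles = [m.register_forward_hook(hook)
               for m in model.modules()
               if isinstance(m, torch.nn.ReLU)]

    with torch.no_grad():
        orig_label = model(x).argmax(dim=1)

    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        activations.clear()
        logits = model((x + delta).clamp(0, 1))  # assumes inputs in [0, 1]
        # Smooth surrogate for activation density: tanh(|a|) is ~0 for
        # zero activations and ~1 otherwise, so raising its mean
        # reduces the incidence of zeros.
        density = torch.stack(
            [torch.tanh(a.abs()).mean() for a in activations]).mean()
        loss = -density + F.cross_entropy(logits, orig_label)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep perturbation within the bound

    for h in handles:
        h.remove()
    return (x + delta).detach().clamp(0, 1)
```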
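For the activation-thresholding defense evaluated in the paper, a sketch of the idea (again hypothetical, with an assumed threshold value) is a drop-in ReLU replacement that forces small activations back to exact zeros:

```python
import torch

class ThresholdedReLU(torch.nn.Module):
    """Activation-thresholding defense: post-ReLU values below a
    small threshold t are forced to exactly zero, restoring some of
    the sparsity the attack destroys. The threshold value here is
    an assumed hyperparameter, not one taken from the paper."""
    def __init__(self, t: float = 0.05):
        super().__init__()
        self.t = t

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.relu(x)
        return torch.where(y < self.t, torch.zeros_like(y), y)
```

Per the abstract, the proposed attack withstands this kind of thresholding (as well as input quantization), so the module illustrates the defense mechanism rather than an effective countermeasure.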
