IEEE Transactions on Computers

Enabling Secure NVM-Based in-Memory Neural Network Computing by Sparse Fast Gradient Encryption



Abstract

Neural network (NN) computing is energy-consuming on traditional computing systems, owing to the inherent memory-wall bottleneck of the von Neumann architecture and Moore's Law approaching its end. Non-volatile memories (NVMs) have been demonstrated as promising alternatives for constructing computing-in-memory (CIM) systems to accelerate NN computing. However, NVM-based NN computing systems are vulnerable to confidentiality attacks because the weight parameters persist in memory when the system is powered off, enabling an adversary with physical access to extract the well-trained NN models. The goal of this article is to find a solution for thwarting confidentiality attacks. We define and model the weight encryption problem. Then we propose an effective framework, containing a sparse fast gradient encryption (SFGE) method and a runtime encryption scheduling (RES) scheme, to guarantee the confidentiality of NN models with negligible performance overhead. Moreover, we improve the SFGE method by incrementally generating the encryption keys. Additionally, we provide variants of the encryption method to better fit quantized models and various mapping strategies. The experiments demonstrate that by encrypting only an extremely small proportion of the weights (e.g., 20 weights per layer in ResNet-101), the NN models can be strictly protected.
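The abstract describes SFGE only at a high level. Below is a minimal, illustrative sketch of one plausible reading of the idea, assuming a PyTorch model: per layer, find the k weights to which the loss is most sensitive (largest gradient magnitude, in the spirit of fast-gradient-style perturbations), perturb only those, and keep their positions and original values as the secret key. The function names, the epsilon value, and the exact perturbation form are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a sparse fast-gradient-style weight encryption
# (NOT the paper's exact algorithm). Per layer, the k most gradient-sensitive
# weights are perturbed; their indices and original values form the key.
import torch
import torch.nn as nn

def select_and_encrypt(model, loss_fn, x, y, k=20, eps=0.5):
    """Encrypt k weights per layer; return a key needed for exact restoration."""
    model.zero_grad()
    loss_fn(model(x), y).backward()            # gradients w.r.t. all weights
    key = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None or p.dim() < 2:  # skip biases in this sketch
                continue
            g = p.grad.view(-1).abs()
            idx = torch.topk(g, min(k, g.numel())).indices
            flat = p.data.view(-1)
            key[name] = (idx, flat[idx].clone())        # secret: positions + values
            # FGSM-style perturbation applied only to the selected weights
            flat[idx] += eps * p.grad.view(-1)[idx].sign()
    return key

def decrypt(model, key):
    """Restore the original weights from the key (e.g., at power-on)."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in key:
                idx, vals = key[name]
                p.data.view(-1)[idx] = vals

# Toy usage: a small network is encrypted and then restored.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
key = select_and_encrypt(net, nn.CrossEntropyLoss(), x, y, k=20)
decrypt(net, key)
```

In this reading, encrypting only the most sensitive weights is what keeps the overhead negligible: the key is tiny (e.g., 20 indices and values per layer), yet inference with the perturbed weights is useless to an adversary who dumps the NVM contents.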
