ACM Transactions on Reconfigurable Technology and Systems

ReDCrypt: Real-Time Privacy-Preserving Deep Learning Inference in Clouds Using FPGAs

Abstract

Artificial Intelligence (AI) is increasingly incorporated into the cloud business to improve the functionality (e.g., accuracy) of the service. The adoption of AI as a cloud service raises serious privacy concerns in applications where the risk of data leakage is not acceptable. Examples of such applications include scenarios where clients hold potentially sensitive private information such as medical records, financial data, and/or location. This article proposes ReDCrypt, the first reconfigurable hardware-accelerated framework that enables privacy-preserving inference of deep learning models in cloud servers. ReDCrypt is well-suited for streaming (a.k.a. real-time AI) settings where clients need to dynamically analyze their data as it is collected over time, without having to queue samples to meet a certain batch size. Unlike prior work, ReDCrypt neither requires changing how AI models are trained nor relies on two non-colluding servers. The privacy-preserving computation in ReDCrypt is executed using Yao's Garbled Circuit (GC) protocol. We break down the deep learning inference task into two phases: (i) privacy-insensitive (local) computation, and (ii) privacy-sensitive (interactive) computation. We devise a high-throughput and power-efficient implementation of the GC protocol on FPGA for the privacy-sensitive phase. ReDCrypt's accompanying API supports seamless integration of ReDCrypt into any deep learning framework. Proof-of-concept evaluations for different DL applications demonstrate up to 57-fold higher throughput per core compared to the best prior solution, with no drop in accuracy.
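To make the two-phase decomposition concrete, below is a minimal, hypothetical NumPy sketch of how an inference pass can be split along the lines the abstract describes: linear layers are "privacy-insensitive" because each party can apply them locally to its additive share of the data, while the nonlinear ReLU is "privacy-sensitive" and, in ReDCrypt, would be evaluated interactively with the FPGA-accelerated GC protocol. The names share, local_linear, and gc_relu_stub are illustrative and are not the actual ReDCrypt API, and the garbled-circuit step is only emulated by reconstructing the shares so that the example stays self-contained and runnable.

import numpy as np

MOD = 1 << 32                        # additive secret sharing over the ring Z_{2^32}
rng = np.random.default_rng(0)

def share(x):
    """Split an integer tensor x into two additive shares modulo 2^32."""
    r = rng.integers(0, MOD, size=x.shape, dtype=np.uint64)
    return r, (x.astype(np.uint64) - r) % MOD

def local_linear(x_share, W, b, add_bias):
    """Privacy-insensitive (local) phase: a public linear layer applied to one
    share; only one party adds the bias so the shares still sum to x @ W + b."""
    y = (x_share @ W.astype(np.uint64)) % MOD
    return (y + b.astype(np.uint64)) % MOD if add_bias else y

def gc_relu_stub(share0, share1):
    """Stand-in for the privacy-sensitive (interactive) phase. In ReDCrypt this
    step would run Yao's GC protocol on the FPGA; here the shares are simply
    reconstructed so the sketch runs end to end."""
    x = ((share0 + share1) % MOD).astype(np.int64)
    x[x >= MOD // 2] -= MOD          # map ring elements back to signed values
    return share(np.maximum(x, 0))   # re-share the result for the next layer

# Toy two-layer network with public (server-side) integer weights.
W1, b1 = rng.integers(-3, 4, (4, 8)), rng.integers(-3, 4, 8)
W2, b2 = rng.integers(-3, 4, (8, 2)), rng.integers(-3, 4, 2)

x = rng.integers(-5, 6, (1, 4))      # client's private input
s0, s1 = share(x)

h0, h1 = local_linear(s0, W1, b1, True), local_linear(s1, W1, b1, False)   # phase (i)
h0, h1 = gc_relu_stub(h0, h1)                                              # phase (ii)
o0, o1 = local_linear(h0, W2, b2, True), local_linear(h1, W2, b2, False)   # phase (i)

out = ((o0 + o1) % MOD).astype(np.int64)
out[out >= MOD // 2] -= MOD
assert np.array_equal(out, np.maximum(x @ W1 + b1, 0) @ W2 + b2)
print(out)

The key point the sketch illustrates is why the split matters for a streaming setting: the local linear phase needs no interaction at all, so only the comparatively small nonlinear portion has to be pushed through the interactive GC phase that ReDCrypt accelerates on the FPGA.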
