Journal: Computers & Security

Digestive neural networks: A novel defense strategy against inference attacks in federated learning



Abstract

Federated Learning (FL) is an efficient and secure machine learning technique designed for decentralized computing systems such as fog and edge computing. Its learning process involves frequent communication: the participating local devices send updates, either the gradients or the parameters of their models, to a central server that aggregates them and redistributes new weights to the devices. In FL, private data never leaves the individual local devices, so FL is regarded as a robust solution for privacy preservation. However, recently introduced membership inference attacks pose a critical threat to FL mechanisms. By eavesdropping only on the updates transferred to the central server, these attacks can recover the private data of a local device. A prevalent defense against such attacks is differential privacy, which adds a sufficient amount of noise to each update to hinder the recovery process; however, it significantly sacrifices the classification accuracy of the FL model. To alleviate this problem, this paper proposes the Digestive Neural Network (DNN), an independent neural network attached to the FL system. The private data owned by each device first passes through the DNN and then trains the FL model. The DNN modifies the input data, thereby distorting the updates, in a way that maximizes the classification accuracy of FL while minimizing the accuracy of inference attacks. Our simulation results show that the proposed DNN performs well on both gradient-sharing and weight-sharing FL mechanisms. For gradient sharing, the DNN achieved 16.17% higher classification accuracy and 9% lower attack accuracy than existing differential privacy schemes. For the weight-sharing FL scheme, the DNN achieved an attack success rate up to 46.68% lower, with 3% higher classification accuracy.
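The abstract contrasts two defenses: differential privacy, which perturbs the *update* after it is computed, and the digestive network, which transforms the *private inputs* before they ever reach the shared model. A minimal toy sketch of that distinction, on a linear model with NumPy, is shown below; this is an illustrative assumption, not the paper's implementation, and names such as `digestive_defense` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y):
    # Gradient of mean squared error for a toy linear model y ~ X @ w.
    return 2 * X.T @ (X @ w - y) / len(y)

def dp_defense(grad, sigma=0.5):
    # Differential-privacy style defense: add Gaussian noise to the update
    # itself. This hides the private data but degrades the aggregated model.
    return grad + rng.normal(0, sigma, grad.shape)

def digestive_defense(X, digest_w):
    # Digestive-network style defense (toy stand-in): transform the private
    # inputs before training, so the transmitted update is computed on
    # distorted data rather than on the raw private records.
    return np.tanh(X @ digest_w)

# One client round under each scheme.
X = rng.normal(size=(32, 4))
y = rng.normal(size=32)
w = np.zeros(4)

g_plain = local_gradient(w, X, y)            # undefended update: leaks info about X, y
g_dp = dp_defense(g_plain)                   # DP-style noisy update
X_dig = digestive_defense(X, rng.normal(size=(4, 4)))
g_dig = local_gradient(w, X_dig, y)          # update computed from digested inputs

print(g_plain.shape, g_dp.shape, g_dig.shape)
```

In the paper's scheme the digestive transform is itself trained so that FL classification accuracy stays high while inference-attack accuracy drops; the fixed random transform above only illustrates where in the pipeline each defense intervenes.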
