IEEE International Conference on Communications

On Defensive Neural Networks Against Inference Attack in Federated Learning



Abstract

Federated Learning (FL) is a promising technique for edge computing environments because it provides better data privacy protection. It lets each edge node in the system send the central server a computed value, called a gradient, rather than its raw data. However, recent research shows that FL remains vulnerable to inference attacks: adversarial algorithms capable of identifying the data used to compute a gradient. One prevalent mitigation strategy is differential privacy, which computes the gradient on noised data, but this introduces another problem, accuracy degradation. To deal with this problem effectively, this paper proposes a new digestive neural network (DNN) and integrates it into FL. The proposed scheme distorts the raw data with the DNN to make it unrecognizable, then computes a gradient with a classification network. The gradients generated by the edge nodes are sent to the server to build the trained model. Simulation results show that the proposed scheme achieves, on average, 9.31% higher classification accuracy and 19.25% lower attack accuracy than differentially private schemes.
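The workflow the abstract describes (distort raw data at the edge, compute a gradient on the distorted representation, aggregate gradients at the server) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names (`digest`, `local_gradient`, `federated_round`) are hypothetical, a fixed `tanh` transform stands in for the learned digestive network, and logistic regression stands in for the classification network.

```python
import numpy as np

rng = np.random.default_rng(0)

def digest(x, W_d):
    # Stand-in for the paper's digestive network: a nonlinear transform
    # that distorts the raw input before any gradient is computed.
    return np.tanh(x @ W_d)

def local_gradient(x, y, W_d, W_c):
    # Edge node: distort the raw data, then compute the classifier's
    # gradient on the distorted representation only.
    z = digest(x, W_d)
    probs = 1.0 / (1.0 + np.exp(-(z @ W_c)))   # logistic regression stand-in
    # Gradient of binary cross-entropy w.r.t. the classifier weights;
    # note it depends on z, never on the raw x.
    return z.T @ (probs - y) / len(x)

def federated_round(nodes, W_d, W_c, lr=0.1):
    # Server: average the gradients uploaded by the edge nodes
    # (FedAvg-style aggregation) and update the shared classifier.
    grads = [local_gradient(x, y, W_d, W_c) for x, y in nodes]
    return W_c - lr * np.mean(grads, axis=0)

# Toy setup: 3 edge nodes, each with 8 samples of 4 features and binary labels.
nodes = [(rng.normal(size=(8, 4)),
          rng.integers(0, 2, size=(8, 1)).astype(float))
         for _ in range(3)]
W_d = rng.normal(size=(4, 4))   # digestive weights, kept at the edge
W_c = np.zeros((4, 1))          # shared classifier weights

for _ in range(5):
    W_c = federated_round(nodes, W_d, W_c)
```

The key property illustrated is that the uploaded gradient is a function of the distorted representation `z`, so the server (or an attacker observing the gradients) never touches the raw data directly.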
