
Differential Privacy Approach to Solve Gradient Leakage Attack in a Federated Machine Learning Environment

Abstract

The recent growth of federated machine learning has dramatically extended traditional machine learning techniques for intrusion detection. By keeping the training dataset at decentralized nodes, federated machine learning keeps people's data private; however, the mechanism still suffers from gradient leakage attacks. Adversaries now take advantage of the shared gradients to reconstruct people's private data with high accuracy, and later use these private network data to launch more devastating attacks against users. It has therefore become essential to develop a solution that prevents such attacks. This paper introduces differential privacy, using Gaussian and Laplace mechanisms to secure the updated gradients during communication. Our results show that clients can achieve a significant level of accuracy with differentially private gradients.