IEEE International Conference on Artificial Intelligence Circuits and Systems

Federated Regularization Learning: an Accurate and Safe Method for Federated Learning



Abstract

Distributed machine learning (ML) and related techniques such as federated learning face a high risk of information leakage. Differential privacy (DP) is commonly used to protect privacy, but it suffers from low accuracy due to the unbalanced data distribution in federated learning and the additional noise introduced by DP itself. In this paper, we propose a novel federated learning model that protects data privacy against gradient leakage attacks and black-box membership inference attacks (MIA). The proposed protection scheme makes the training data hard to reproduce and hard to distinguish from the model's predictions. A small simulated attacker network is embedded as a regularization penalty to defend against malicious attacks. We further introduce a gradient modification method to secure the weight information and remedy the additional accuracy loss. The proposed privacy protection scheme is evaluated on MNIST and CIFAR-10 and compared with state-of-the-art DP-based federated learning models. Experimental results demonstrate that our model successfully defends user-level privacy against diverse external attacks with negligible accuracy loss.
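The two ideas in the abstract — an embedded simulated attacker whose success is added to the training objective as a regularization penalty, and a gradient modification step applied before weights are shared — can be sketched minimally as below. This is a hedged illustration, not the paper's actual architecture: the linear-probe attacker, the loss weighting `lam`, and the norm-clipping form of gradient modification are all assumptions made for the sketch.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with a max-shift for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def task_loss(probs, labels):
    # Standard cross-entropy of the main model's predictions.
    n = len(labels)
    return float(-np.log(probs[np.arange(n), labels] + 1e-12).mean())

def attacker_score(probs, w_att):
    # Hypothetical stand-in for the paper's small simulated attacker
    # network: a linear probe on the prediction vectors whose mean
    # sigmoid output estimates how confidently it can separate them
    # (e.g. member vs. non-member). Returns a value in (0, 1).
    return float((1.0 / (1.0 + np.exp(-(probs @ w_att)))).mean())

def regularized_loss(probs, labels, w_att, lam=0.5):
    # Combined objective: task loss plus the attacker's success,
    # so minimizing it also pushes the model toward predictions
    # the attacker cannot exploit.
    return task_loss(probs, labels) + lam * attacker_score(probs, w_att)

def modify_gradient(grad, clip=1.0):
    # One simple form of gradient modification: bound the gradient
    # norm before the update leaves the client, limiting what a
    # gradient-leakage attack can reconstruct. (Assumed form; the
    # paper's exact modification rule is not specified here.)
    norm = np.linalg.norm(grad)
    return grad if norm <= clip else grad * (clip / norm)
```

In a federated round under this sketch, each client would minimize `regularized_loss` locally and send only `modify_gradient(grad)` to the server; the attacker probe `w_att` is trained in opposition, so the penalty stays an up-to-date estimate of leakage.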
