Journal: Frontiers of Computer Science (《中国计算机科学前沿:英文版》)

Defense against local model poisoning attacks to byzantine-robust federated learning

Abstract

1 Introduction

As a new mode of distributed learning, Federated Learning (FL) enables multiple organizations or clients to jointly train an artificial intelligence model without sharing their own datasets. Compared with a model trained by each client alone, a high-accuracy federated model can be obtained after multiple communication rounds of FL. Owing to its privacy protection and distributed-learning characteristics, FL has been applied in many fields, such as the prognosis of pandemic diseases and smart manufacturing systems.
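The training loop described above can be sketched as a minimal FedAvg-style round: each client updates the global model on its private data, and the server averages the resulting local models. This is an illustrative assumption of how such a round looks, not the paper's specific method; the linear-regression task, the function names (`local_update`, `fedavg`), and all hyperparameters are hypothetical.

```python
import numpy as np

def local_update(global_model, data, lr=0.1):
    # Hypothetical client step: one gradient-descent update on the
    # client's private least-squares data (never shared with others).
    X, y = data
    grad = 2 * X.T @ (X @ global_model - y) / len(y)
    return global_model - lr * grad

def fedavg(global_model, client_datasets):
    # Server-side aggregation: average the clients' local models
    # (the basic FedAvg rule, here with equal client weights).
    updates = [local_update(global_model, d) for d in client_datasets]
    return np.mean(updates, axis=0)

# Synthetic setup: three clients, each holding 20 private samples
# generated from the same underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(100):  # multiple communication rounds
    w = fedavg(w, clients)
```

After enough rounds, the averaged model approaches the model a single party could have trained on the pooled data, which is the accuracy benefit the abstract refers to.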
