METHODS AND APPARATUSES FOR DEFENSE AGAINST ADVERSARIAL ATTACKS ON FEDERATED LEARNING SYSTEMS
Abstract
Methods and computing apparatuses for defending against model poisoning attacks in federated learning are described. One or more updates are obtained, where each update represents the difference between the parameters (e.g. weights) of the global model and the parameters of a respective local model. Random noise perturbation and normalization are applied to each update to obtain one or more perturbed and normalized updates. The parameters of the global model are updated by adding an aggregation of the one or more perturbed and normalized updates to the parameters of the global model. In addition, one or more learned parameters of the previous global model are also perturbed using random noise.
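The aggregation procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the use of Gaussian noise, L2 normalization, and mean aggregation are all assumptions, since the abstract does not specify the noise distribution, norm, or aggregation rule.

```python
import numpy as np

def perturbed_aggregation(global_weights, local_weights_list,
                          noise_std=0.01, rng=None):
    """Sketch of the described defense: perturb and normalize each client
    update, aggregate the results, and add them to the noise-perturbed
    global model parameters. All specifics here are illustrative."""
    rng = rng if rng is not None else np.random.default_rng(0)

    perturbed_updates = []
    for local_w in local_weights_list:
        # Each update is the difference between local and global parameters.
        update = local_w - global_weights
        # Random noise perturbation (Gaussian noise assumed).
        update = update + rng.normal(0.0, noise_std, size=update.shape)
        # Normalization (unit L2 norm assumed).
        norm = np.linalg.norm(update)
        if norm > 0:
            update = update / norm
        perturbed_updates.append(update)

    # Aggregate the perturbed and normalized updates (simple mean assumed).
    aggregated = np.mean(perturbed_updates, axis=0)

    # Also perturb the previous global model's parameters with random noise.
    perturbed_global = global_weights + rng.normal(
        0.0, noise_std, size=global_weights.shape)

    # New global parameters: perturbed global weights plus the aggregation.
    return perturbed_global + aggregated
```

Because each update is normalized before aggregation, a single malicious client cannot dominate the aggregate through an unusually large update, and the added noise further limits how precisely an attacker can steer the global model.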