IEEE Data Science Workshop

Byzantine-Robust Stochastic Gradient Descent for Distributed Low-Rank Matrix Completion



Abstract

To overcome the growing privacy concerns of centralized machine learning, federated learning has been proposed to enable collaborative training of a model on data stored locally on the owners' devices. However, adversarial attacks (e.g., Byzantine attacks in the worst case) still exist in federated learning systems, so the information shared by the data owners may be unreliable. Byzantine-robust aggregation methods, such as the median, the geometric median, and Krum, have been found to perform well in eliminating the negative effects of Byzantine attacks. In this paper, we study the distributed low-rank matrix completion problem in a federated learning setting where some data owners are malicious. We combine Byzantine-robust aggregation rules with stochastic gradient descent (SGD) to solve this problem. Numerical experiments on the Netflix dataset demonstrate that the proposed methods achieve performance comparable to that of SGD without attacks.
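The abstract names three robust aggregation rules that replace the ordinary mean at the server. The following minimal NumPy sketch illustrates coordinate-wise median, geometric median (via Weiszfeld's fixed-point iterations), and Krum; it is an assumption-laden illustration of these standard rules, not the paper's implementation, and the exact variants and parameters used in the paper may differ.

```python
import numpy as np

def coordinate_median(grads):
    """Coordinate-wise median of the workers' gradient arrays."""
    return np.median(np.stack(grads), axis=0)

def geometric_median(grads, iters=50, eps=1e-8):
    """Geometric median via Weiszfeld's iterations (illustrative tolerance)."""
    pts = np.stack([g.ravel() for g in grads])
    z = pts.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(pts - z, axis=1)
        w = 1.0 / np.maximum(d, eps)          # inverse-distance weights
        z = (w[:, None] * pts).sum(axis=0) / w.sum()
    return z.reshape(grads[0].shape)

def krum(grads, num_byzantine):
    """Krum: return the gradient with the smallest summed squared distance
    to its n - f - 2 nearest neighbors (f = assumed Byzantine count)."""
    pts = np.stack([g.ravel() for g in grads])
    n = len(pts)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2) ** 2
    k = n - num_byzantine - 2                  # neighbors scored per candidate
    scores = [np.sort(dists[i])[1:k + 1].sum() for i in range(n)]  # [0] is self
    return grads[int(np.argmin(scores))]
```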
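And a toy end-to-end run of the overall idea: each worker holds a block of rows of the rating matrix, keeps its row factors local, and the server aggregates the workers' gradients for the shared column factor with a robust rule (here the median, re-implemented inline so the snippet is self-contained). The matrix sizes, learning rate, worker count, and Gaussian-noise attack below are all illustrative assumptions, not the paper's experimental setup on Netflix data.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 40, 3                                     # matrix size and rank
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # synthetic low-rank matrix
Omega = rng.random((m, n)) < 0.3                        # mask of observed entries
workers = np.array_split(np.arange(m), 10)              # each worker owns a row block
num_byzantine = 3                                       # first 3 workers are malicious

U = rng.normal(scale=0.1, size=(m, r))                  # row factors (kept local)
V = rng.normal(scale=0.1, size=(n, r))                  # shared column factor
lr = 0.01

for step in range(2000):
    grads = []
    for w, idx in enumerate(workers):
        R = (U[idx] @ V.T - M[idx]) * Omega[idx]        # local residual on observed entries
        if w < num_byzantine:
            grads.append(rng.normal(scale=10.0, size=(n, r)))  # Byzantine message
        else:
            grads.append(R.T @ U[idx])                  # honest gradient w.r.t. V
        U[idx] -= lr * (R @ V)                          # row factors updated locally
    V -= lr * np.median(np.stack(grads), axis=0)        # robust aggregation at the server

err = np.linalg.norm((U @ V.T - M) * Omega) / np.linalg.norm(M * Omega)
print(f"relative error on observed entries: {err:.3f}")
```

Because only the shared factor V is aggregated, a Byzantine worker can attack the system solely through its reported V-gradient; with the median in place of the mean, a minority of arbitrary messages cannot drag the aggregate outside the range of the honest gradients.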
