Annual American Control Conference

Byzantine-resilient distributed learning under constraints


Abstract

We consider a class of convex distributed statistical learning problems with inequality constraints in an adversarial scenario. At each iteration, an $\alpha$-fraction of $m$ machines, which are supposed to compute stochastic gradients of the loss function and send them to a master machine, may act adversarially and send faulty gradients. To guard against defective information sharing, we develop a Byzantine primal-dual algorithm. For $\alpha \in [0, 0.5)$, we prove that after $T$ iterations the algorithm achieves $\tilde{\mathcal{O}}(1/T + 1/\sqrt{mT} + \alpha/\sqrt{T})$ statistical error bounds on both the optimality gap and the constraint violation. Our result holds for a class of normed vector spaces and, when specialized to the Euclidean space, it attains the optimal error bound for Byzantine stochastic gradient descent.

