JMLR: Workshop and Conference Proceedings

Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates

Abstract

In this paper, we develop distributed optimization algorithms that are provably robust against Byzantine failures (arbitrary and potentially adversarial behavior in distributed computing systems), with a focus on achieving optimal statistical performance. A main result of this work is a sharp analysis of two robust distributed gradient descent algorithms based on median and trimmed mean operations, respectively. We prove statistical error rates for three kinds of population loss functions: strongly convex, non-strongly convex, and smooth non-convex. In particular, these algorithms are shown to achieve order-optimal statistical error rates for strongly convex losses. To achieve better communication efficiency, we further propose a median-based distributed algorithm that is provably robust and uses only one communication round. For strongly convex quadratic loss, we show that this algorithm achieves the same optimal error rate as the robust distributed gradient descent algorithms.
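For concreteness, below is a minimal NumPy sketch of the two aggregation rules the abstract refers to, coordinate-wise median and coordinate-wise trimmed mean, inside a toy distributed gradient descent loop with a few simulated Byzantine workers. All function names, data, and hyperparameters here are illustrative assumptions, not taken from the paper; the abstract's one-round variant would apply a similar median step to the workers' locally computed solutions instead of to per-round gradients.

```python
import numpy as np

def coordinate_wise_median(grads):
    """Median of each coordinate across workers; grads has shape (m, d)."""
    return np.median(grads, axis=0)

def coordinate_wise_trimmed_mean(grads, beta):
    """Drop the beta-fraction largest and smallest values in each
    coordinate, then average the rest. Requires 0 <= beta < 1/2."""
    m = grads.shape[0]
    k = int(np.floor(beta * m))          # values trimmed from each end
    sorted_grads = np.sort(grads, axis=0)
    return sorted_grads[k:m - k].mean(axis=0)

# Toy run: least-squares regression, m workers, a few of them Byzantine.
rng = np.random.default_rng(0)
d, m, n = 5, 20, 50                      # dimension, workers, samples/worker
w_star = rng.normal(size=d)              # ground-truth parameter
X = rng.normal(size=(m, n, d))           # each worker's local data shard
y = X @ w_star + 0.1 * rng.normal(size=(m, n))
byzantine = rng.choice(m, size=4, replace=False)

w = np.zeros(d)
for _ in range(200):
    residual = X @ w - y                              # shape (m, n)
    grads = np.einsum("mnd,mn->md", X, residual) / n  # honest local gradients
    grads[byzantine] = 100.0 * rng.normal(size=(4, d))  # adversarial updates
    w -= 0.1 * coordinate_wise_median(grads)
    # or: w -= 0.1 * coordinate_wise_trimmed_mean(grads, beta=0.25)

print("parameter error:", np.linalg.norm(w - w_star))
```

In this sketch the trimming fraction `beta` is chosen above the fraction of Byzantine workers (4 of 20), matching the paper's requirement that the trimmed mean discard at least as many values per coordinate as there are compromised machines.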
