Automatica

Optimal distributed stochastic mirror descent for strongly convex optimization

Abstract

In this paper we consider convergence rates for stochastic strongly convex optimization, in the non-Euclidean sense, over a constraint set on a time-varying multi-agent network. We propose two efficient non-Euclidean stochastic subgradient descent algorithms that use a Bregman divergence as the distance-measuring function, rather than the Euclidean distances employed by standard distributed stochastic projected subgradient algorithms. For distributed optimization of non-smooth and strongly convex functions for which only stochastic subgradients are available, the first algorithm recovers the best previously known rate of O(ln(T)/T) (where T is the total number of iterations). The second algorithm is an epoch variant of the first and attains the optimal convergence rate of O(1/T), matching that of the best previously known centralized stochastic subgradient algorithm. Finally, we report some simulation results to illustrate the proposed algorithms. (C) 2018 Elsevier Ltd. All rights reserved.
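The abstract does not spell out the update rules, but a minimal sketch of the general idea behind distributed stochastic mirror descent, assuming an entropic mirror map on the probability simplex (so the Bregman divergence is the KL divergence), a doubly stochastic mixing matrix W over the network, and an O(1/t) step size for strong convexity, might look as follows. The helper names entropic_mirror_step and distributed_smd, and the mixing-then-mirror-step ordering, are illustrative assumptions rather than the paper's exact algorithm.

import numpy as np

def entropic_mirror_step(x, grad, eta):
    # One mirror descent step on the probability simplex with the
    # negative-entropy mirror map, whose Bregman divergence is the
    # KL divergence: x_plus[j] is proportional to x[j] * exp(-eta * grad[j]).
    z = x * np.exp(-eta * grad)
    return z / z.sum()

def distributed_smd(stoch_grads, W, x0, T, eta0):
    # Hypothetical sketch of distributed stochastic mirror descent:
    # each agent first averages its neighbors' iterates through the doubly
    # stochastic mixing matrix W, then takes a local mirror step with its
    # own stochastic subgradient and a decaying O(1/t) step size, the usual
    # choice for strongly convex objectives.
    x = x0.astype(float).copy()        # shape (n_agents, dim), rows on the simplex
    for t in range(1, T + 1):
        x = W @ x                      # consensus / network mixing step
        eta = eta0 / t                 # decaying step size
        for i in range(x.shape[0]):
            g = stoch_grads[i](x[i])   # agent i's stochastic subgradient oracle
            x[i] = entropic_mirror_step(x[i], g, eta)
    return x.mean(axis=0)              # average of the agents' final iterates

In this sketch, stoch_grads is a list of per-agent callables returning stochastic subgradients of the local losses; the entropic map is chosen only because it yields a closed-form multiplicative update on the simplex, and other Bregman divergences would give different mirror steps.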