IFAC PapersOnLine

Improving Distributed Stochastic Gradient Descent Estimate via Loss Function Approximation

Abstract

Both learning and optimization in the context of huge amounts of data have become very important. Unfortunately, even online and parallel optimization methods may fail to handle such amounts of data within given time limits. In this case, distributed optimization methods may be the only solution. In this paper we consider a particular type of optimization problem in a distributed setting. We propose an algorithm substantially based on the distributed stochastic gradient descent method of Zinkevich et al. (2010). Finally, we experimentally study the properties of the proposed algorithm and demonstrate its superiority for this particular type of optimization problem.
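The abstract does not describe the algorithm itself, but the Zinkevich et al. (2010) baseline it builds on runs SGD independently on each worker's data partition and then averages the resulting parameter vectors. Below is a minimal NumPy sketch of that baseline for a squared-loss problem; the function names, the loss, and all parameters are illustrative assumptions rather than the paper's code, and the title suggests the paper improves the final estimate via loss function approximation instead of plain averaging.

```python
import numpy as np

def local_sgd(X, y, lr=0.01, epochs=1, rng=None):
    """Plain SGD for squared loss on one worker's data shard (illustrative)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5 * (x·w - y)^2
            w -= lr * grad
    return w

def simu_parallel_sgd(X, y, n_workers=4, lr=0.01, epochs=1):
    """Zinkevich et al. (2010) baseline: partition the data, run SGD
    independently on each shard, then average the parameter vectors."""
    shards = np.array_split(np.arange(len(y)), n_workers)
    models = [local_sgd(X[idx], y[idx], lr, epochs,
                        rng=np.random.default_rng(k))
              for k, idx in enumerate(shards)]
    # Naive averaging step; the paper's loss-function-approximation
    # idea would replace this aggregation (details not in the abstract).
    return np.mean(models, axis=0)

# Toy usage on synthetic linear data (hypothetical setup).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.1 * rng.normal(size=1000)
w_hat = simu_parallel_sgd(X, y, n_workers=4, lr=0.01, epochs=3)
print("estimate:", np.round(w_hat, 2))
```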
