European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)

Distributed Stochastic Optimization of Regularized Risk via Saddle-Point Problem



Abstract

Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task. When working with massive data, it is desirable to perform stochastic optimization in parallel. Unfortunately, many existing stochastic optimization algorithms cannot be parallelized efficiently. In this paper we show that one can rewrite the regularized risk minimization problem as an equivalent saddle-point problem, and propose an efficient distributed stochastic optimization (DSO) algorithm. We prove the algorithm's rate of convergence; remarkably, our analysis shows that the algorithm scales almost linearly with the number of processors. We also verify with empirical evaluations that the proposed algorithm is competitive with other parallel, general-purpose stochastic and batch optimization algorithms for regularized risk minimization.
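As a sketch of the reformulation the abstract refers to (one standard route, via the Fenchel conjugate \ell^{*} of the loss; the paper's exact construction may differ), regularized risk minimization over examples (x_i, y_i), i = 1, ..., m, can be rewritten as a saddle-point problem by introducing one dual variable per example:

\[
\min_{w} \; \lambda\,\Omega(w) + \frac{1}{m}\sum_{i=1}^{m} \ell(\langle w, x_i\rangle, y_i)
\;=\;
\min_{w}\,\max_{\alpha \in \mathbb{R}^{m}} \; \lambda\,\Omega(w) + \frac{1}{m}\sum_{i=1}^{m}\bigl(\alpha_i\,\langle w, x_i\rangle - \ell^{*}(\alpha_i, y_i)\bigr).
\]

In this form the primal variable w and the dual variables \alpha_i interact only through the bilinear terms \alpha_i\,\langle w, x_i\rangle, so stochastic primal-dual updates on disjoint blocks of examples and coordinates can run on different processors with little synchronization, which is the structural property behind the near-linear scaling claim.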