European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)

Distributed Stochastic Optimization of Regularized Risk via Saddle-Point Problem

Abstract

Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task. When working with massive data, it is desirable to perform stochastic optimization in parallel. Unfortunately, many existing stochastic optimization algorithms cannot be parallelized efficiently. In this paper we show that the regularized risk minimization problem can be rewritten as an equivalent saddle-point problem, and propose an efficient distributed stochastic optimization (DSO) algorithm. We prove the algorithm's rate of convergence; remarkably, our analysis shows that it scales almost linearly with the number of processors. Empirical evaluations also confirm that the proposed algorithm is competitive with other parallel, general-purpose stochastic and batch optimization algorithms for regularized risk minimization.
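To make the reformulation concrete, here is a sketch of the standard Fenchel-duality construction that yields such a saddle-point problem (an illustration of the general technique; the paper's exact construction may differ in its details). For a linear model with training pairs (x_i, y_i), a closed convex loss \ell, and regularizer \lambda\Omega(w), applying strong duality pointwise to each loss term gives

\min_{w}\; \lambda\,\Omega(w) + \frac{1}{m}\sum_{i=1}^{m} \ell(\langle w, x_i\rangle, y_i)
\;=\; \min_{w}\,\max_{\alpha\in\mathbb{R}^{m}}\; \lambda\,\Omega(w) + \frac{1}{m}\sum_{i=1}^{m}\Bigl(\alpha_i\,\langle w, x_i\rangle - \ell^{\star}(\alpha_i, y_i)\Bigr),

where \ell^{\star} is the Fenchel conjugate of the loss in its first argument. The coupling term \alpha_i\langle w, x_i\rangle decomposes over the nonzero coordinates of x_i, so stochastic primal-dual updates touch only small blocks of (w, \alpha) at a time; this decomposability is what lets a distributed scheme partition both examples and parameters across processors.

Below is a minimal serial sketch of stochastic gradient descent-ascent on this objective, specialized to squared loss and an L2 regularizer, for which \ell^{\star}(\alpha, y) = \alpha^2/2 + \alpha y. The function name and hyperparameters are illustrative; the DSO algorithm itself additionally distributes w and \alpha over workers.

import numpy as np

def sgda_saddle_ridge(X, y, lam=0.1, eta0=0.2, epochs=100, seed=0):
    # Solves min_w max_a  lam/2 ||w||^2
    #                     + (1/m) sum_i (a_i <w, x_i> - a_i^2/2 - a_i y_i),
    # the saddle-point form of ridge regression (squared loss).
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    a = np.zeros(m)                      # one dual variable per example
    for t in range(epochs):
        eta = eta0 / np.sqrt(t + 1)      # decaying step size
        for i in rng.permutation(m):
            margin = X[i] @ w
            # primal descent with an unbiased single-example gradient estimate
            w -= eta * (lam * w + a[i] * X[i])
            # dual ascent on the sampled coordinate; its optimum is the residual
            a[i] += eta * (margin - a[i] - y[i])
    return w

# Toy sanity check against the closed-form ridge solution:
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.0, 3.0, 0.5])
w_sgda = sgda_saddle_ridge(X, y, lam=0.1)
w_exact = np.linalg.solve(X.T @ X / 200 + 0.1 * np.eye(5),
                          X.T @ y / 200)

At the saddle point each a_i equals the residual <w, x_i> - y_i, and eliminating a recovers the original ridge objective, which is what "equivalent saddle-point problem" means here.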
