SIGKDD Explorations

A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization


Abstract

We propose a communication- and computation-efficient distributed optimization algorithm that uses second-order information to solve empirical risk minimization (ERM) problems with a nonsmooth regularization term. Current second-order and quasi-Newton methods for this problem either do not work well in the distributed setting or work only for specific regularizers. Our algorithm uses successive quadratic approximations, and we describe how to maintain an approximation of the Hessian and solve the subproblems efficiently in a distributed manner. The proposed method enjoys global linear convergence for a broad range of non-strongly convex problems that includes the most commonly used ERMs, and thus requires lower communication complexity. It also converges on non-convex problems, so it has the potential to be used in applications such as deep learning. Initial computational results on convex problems demonstrate that our method significantly improves on the communication cost and running time of current state-of-the-art methods.
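The abstract does not reproduce the problem formulation, but the regularized ERM objective and the successive quadratic approximation step it describes typically take the following form; the symbols F, f, g, \ell, B_k, and \alpha_k are standard notation introduced here for illustration rather than quoted from the paper.

\[
\min_{w \in \mathbb{R}^d} F(w) = f(w) + g(w),
\qquad
f(w) = \frac{1}{n} \sum_{i=1}^{n} \ell(w; x_i, y_i),
\]

where f is the smooth empirical loss and g is the nonsmooth regularizer (for example, the \ell_1 norm). At each iterate w_k, a proximal quasi-Newton method of this kind approximately solves the subproblem

\[
d_k \approx \arg\min_{d} \; \nabla f(w_k)^{\top} d + \tfrac{1}{2}\, d^{\top} B_k\, d + g(w_k + d),
\qquad
w_{k+1} = w_k + \alpha_k d_k,
\]

with B_k a quasi-Newton (e.g., L-BFGS) approximation of \nabla^2 f(w_k) and \alpha_k a line-search step size. In a distributed setting, \nabla f(w_k) decomposes over the data partitions held by the different machines, which is what allows the Hessian approximation and the subproblem solves to be maintained with limited communication, as the abstract indicates.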
