IEEE Transactions on Neural Networks and Learning Systems

Parallel Coordinate Descent Newton Method for Efficient L1-Regularized Loss Minimization


Abstract

Recent years have witnessed advances in parallel algorithms for large-scale optimization problems. Notwithstanding the demonstrated success, existing algorithms that parallelize over features are usually limited by divergence issues under high parallelism or require data preprocessing to alleviate these problems. In this paper, we propose a Parallel Coordinate Descent algorithm using approximate Newton steps (PCDN) that is guaranteed to converge globally without data preprocessing. The key component of the PCDN algorithm is the high-dimensional line search, which guarantees global convergence under high parallelism. The PCDN algorithm randomly partitions the feature set into b subsets (bundles) of size P, and sequentially processes each bundle by first computing the descent directions for each feature in parallel and then conducting a P-dimensional line search to compute the step size. We show that: 1) the PCDN algorithm is guaranteed to converge globally despite increasing parallelism and 2) the PCDN algorithm converges to the specified accuracy ε within a bounded number of iterations T_ε, and T_ε decreases with increasing parallelism (bundle size P). In addition, the data transfer and synchronization cost of the P-dimensional line search can be minimized by maintaining intermediate quantities. For concreteness, the proposed PCDN algorithm is applied to L1-regularized logistic regression and L1-regularized L2-loss support vector machine problems. Experimental evaluations on seven benchmark data sets show that the PCDN algorithm exploits parallelism well and outperforms state-of-the-art methods.
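To make the bundle-processing loop concrete, the sketch below applies the idea to L1-regularized logistic regression, i.e., minimizing ||w||_1 + C * sum_i log(1 + exp(-y_i * x_i^T w)). It is a minimal illustration under stated assumptions, not the authors' implementation: the function name pcdn_l1_logreg, the bundle size P, the backtracking constants beta and sigma, and the dense NumPy arrays are choices made here for readability, and NumPy vectorization stands in for the feature-parallel computation of the per-coordinate approximate Newton directions.

```python
# Illustrative sketch (not the authors' code) of PCDN for L1-regularized
# logistic regression:  minimize_w ||w||_1 + C * sum_i log(1 + exp(-y_i x_i^T w)).
# Assumptions: dense X, labels y in {-1, +1}, vectorization in place of threads.
import numpy as np

def pcdn_l1_logreg(X, y, C=1.0, P=8, epochs=10, beta=0.5, sigma=0.01, seed=0):
    m = X.shape[1]
    w = np.zeros(m)
    Xw = X @ w                      # maintained intermediate quantity x_i^T w
    rng = np.random.default_rng(seed)

    for _ in range(epochs):
        order = rng.permutation(m)  # random partition of features into bundles
        for start in range(0, m, P):
            bundle = order[start:start + P]
            Xb, wb = X[:, bundle], w[bundle]

            # 1) Approximate Newton direction for every feature in the bundle.
            #    Each coordinate needs only its own (g_j, h_j), so this is the
            #    step that can be computed in parallel across the bundle.
            p = np.exp(-np.logaddexp(0.0, -y * Xw))        # sigma(y_i x_i^T w)
            g = C * (Xb * ((p - 1.0) * y)[:, None]).sum(axis=0)
            h = C * (Xb ** 2 * (p * (1.0 - p))[:, None]).sum(axis=0) + 1e-12
            d = np.where(g + 1.0 <= h * wb, -(g + 1.0) / h,
                np.where(g - 1.0 >= h * wb, -(g - 1.0) / h, -wb))

            # 2) P-dimensional Armijo line search along the bundle direction.
            #    Reusing Xw and Xd keeps each backtracking step O(n_samples).
            delta = g @ d + np.abs(wb + d).sum() - np.abs(wb).sum()
            Xd = Xb @ d
            loss_old = C * np.logaddexp(0.0, -y * Xw).sum()
            l1_old = np.abs(wb).sum()
            alpha = 1.0
            while alpha > 1e-10:
                loss_new = C * np.logaddexp(0.0, -y * (Xw + alpha * Xd)).sum()
                if (loss_new - loss_old
                        + np.abs(wb + alpha * d).sum() - l1_old
                        <= sigma * alpha * delta):
                    break
                alpha *= beta

            w[bundle] = wb + alpha * d
            Xw += alpha * Xd
    return w
```

Processing the bundles sequentially and choosing a single step size via the P-dimensional line search over the whole bundle, rather than updating the P coordinates independently, is what the abstract credits for preserving global convergence as the parallelism P grows; maintaining Xw and Xd is the kind of intermediate bookkeeping the abstract describes for keeping the line search's data-transfer and synchronization cost low.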
