IEEE Transactions on Cybernetics

Group-Based Alternating Direction Method of Multipliers for Distributed Linear Classification



Abstract

The alternating direction method of multipliers (ADMM) algorithm has been widely employed for distributed machine-learning tasks. However, it suffers from several limitations, e.g., a relatively low convergence speed and a high time cost. To this end, in this paper a novel method, namely the group-based ADMM (GADMM), is proposed for distributed linear classification. In particular, to accelerate convergence and improve global consensus, a group layer is first introduced in GADMM to divide all the slave nodes into several groups. Then, all the local variables (from the slave nodes) are gathered in the group layer to generate different group variables. Finally, by using a weighted-average method, the group variables are coordinated to update the global variable (from the master node) until the solution of the global problem is reached. According to the theoretical analysis, we find that: 1) GADMM converges mathematically at the rate O(1/k), where k is the number of outer iterations, and 2) by using the grouping method, GADMM improves the convergence speed compared with a distributed ADMM framework without grouping. Moreover, we systematically evaluate GADMM on four publicly available LIBSVM datasets. Compared with disADMM and stochastic dual coordinate ascent with ADMM (SDCA-ADMM) for distributed classification, GADMM reduces the number of outer iterations, which leads to faster convergence and better global consensus. In particular, a statistical significance test was conducted experimentally, and the results validate that GADMM can significantly save up to 30% of the total time cost (with less than 0.6% accuracy loss) compared with disADMM on large-scale datasets, e.g., webspam and epsilon.
