Conference: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)

Distributed Learning of Non-convex Linear Models with One Round of Communication



Abstract

We present the optimal weighted average (OWA) distributed learning algorithm for linear models. OWA achieves statistically optimal learning rates, uses only one round of communication, works on non-convex problems, and supports a fast cross validation procedure. The OWA algorithm first trains local models on each of the compute nodes; then a master machine merges the models using a second round of optimization. This second optimization uses only a small fraction of the data, and so has negligible computational cost. Compared with similar distributed estimators that merge locally trained models, OWA either has stronger statistical guarantees, is applicable to more models, or has a more computationally efficient merging procedure.
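The merging idea the abstract describes (train locally, then let a master re-optimize over the local solutions using only a small amount of data) can be sketched as follows. This is a minimal illustration using ordinary least-squares linear regression as a stand-in model; the synthetic data, sample sizes, and variable names are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data, split across K "compute nodes".
d, K, n_per = 5, 4, 200
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

# Step 1: each node fits a local model on its own shard (one round
# of communication: each node sends its d-dimensional solution up).
local_models = []
for _ in range(K):
    X, y = make_data(n_per)
    w_local, *_ = np.linalg.lstsq(X, y, rcond=None)
    local_models.append(w_local)
W = np.column_stack(local_models)   # d x K matrix of local solutions

# Step 2: the master runs a second, K-dimensional optimization over
# the span of the local models, using only a small fraction of data,
# so its computational cost is negligible compared with Step 1.
X_m, y_m = make_data(50)
v, *_ = np.linalg.lstsq(X_m @ W, y_m, rcond=None)
w_owa = W @ v                       # merged ("optimally weighted") model

w_avg = W.mean(axis=1)              # plain parameter averaging, for contrast
```

Note that plain averaging (`w_avg`) fixes uniform weights, while the second optimization chooses the combination of local models that best fits the held-out merge data.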
