2018 IEEE Data Science Workshop

AN EXPONENTIALLY CONVERGENT ALGORITHM FOR LEARNING UNDER DISTRIBUTED FEATURES

Abstract

This work studies the problem of learning under both large data and large feature space scenarios. The feature information is assumed to be spread across agents in a network, where each agent observes some of the features. Through local cooperation, the agents are supposed to interact with each other to solve the inference problem and converge towards the global minimizer of the empirical risk. We study this problem exclusively in the primal domain, and propose new and effective distributed solutions with guaranteed convergence to the minimizer. This is achieved by combining a dynamic diffusion construction, a pipeline strategy, and variance-reduced techniques. Simulation results illustrate the conclusions.
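The paper's specific construction (dynamic diffusion, pipelining, and variance reduction) is not reproduced here; the sketch below only illustrates the feature-distributed setting the abstract describes, using a hypothetical ridge-regression objective. Each agent observes one column block of the data and updates only its own weight block; the agents cooperate by sharing their partial predictions, whose sum gives the global prediction. All names, dimensions, and step sizes are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 200, 8, 4                     # samples, features, agents (assumed)
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true

# Column-wise split: agent k observes only the feature block X[:, blocks[k]]
blocks = np.array_split(np.arange(d), K)
w = [np.zeros(len(b)) for b in blocks]  # each agent's local weight block

lam, mu = 0.1, 0.05                     # ridge regularizer, step size
for _ in range(500):
    # Agents exchange partial predictions; their sum is the global prediction
    pred = sum(X[:, b] @ w[k] for k, b in enumerate(blocks))
    resid = pred - y
    for k, b in enumerate(blocks):
        # Local gradient of the regularized empirical risk w.r.t. agent k's block
        grad = X[:, b].T @ resid / N + lam * w[k]
        w[k] -= mu * grad

# Compare against the centralized ridge solution
w_hat = np.concatenate(w)
w_star = np.linalg.solve(X.T @ X / N + lam * np.eye(d), X.T @ y / N)
print(np.linalg.norm(w_hat - w_star))
```

Because the ridge objective is strongly convex, this plain synchronous scheme already converges linearly to the centralized minimizer; the paper's contribution lies in achieving guaranteed convergence with only local cooperation and stochastic (variance-reduced) gradients, which this toy example does not attempt.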
