IEEE Data Science Workshop

COKE: Communication-Censored Kernel Learning via Random Features



Abstract

Distributed kernel-based methods are attractive in nonlinear learning tasks where either a dataset is too large to be processed on a single machine or the data are only locally available to geographically distributed sites. For the first case, we propose to split the large dataset into multiple mini-batches and distribute them to distinct sites for parallel learning through the alternating direction method of multipliers (ADMM). For the second case, we develop a decentralized ADMM so that each site can solve the learning task collaboratively through one-hop communications. To circumvent the curse of dimensionality in kernel-based methods, we leverage the random feature approximation to map the large-volume data into a smaller feature space. This also yields a common set of decision parameters that can be exchanged among sites. Motivated by the need to conserve energy and reduce communication overhead, we apply a censoring strategy that evaluates the updated parameter at each site and decides whether the update is worth transmitting. The proposed COmmunication-censored KErnel learning (COKE) algorithms are shown to be communication-efficient and learning-effective through simulations on both synthetic and real datasets.
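Two ingredients from the abstract can be sketched concretely: the random Fourier feature map that approximates a Gaussian kernel in a low-dimensional space, and a simple transmit-or-censor decision rule. This is a minimal illustration, not the paper's implementation; the threshold rule `should_transmit` and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Exact Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def random_features(X, W, b):
    # Random Fourier feature map z(x) = sqrt(2/D) * cos(W x + b),
    # chosen so that z(x)^T z(y) approximates k(x, y).
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

def should_transmit(theta_new, theta_last_sent, tau):
    # Illustrative censoring rule: broadcast the local update only when it
    # has moved far enough from the last transmitted value.
    return np.linalg.norm(theta_new - theta_last_sent) >= tau

rng = np.random.default_rng(0)
d, D, sigma = 5, 2000, 1.0
W = rng.normal(scale=1.0 / sigma, size=(D, d))  # spectral samples of the Gaussian kernel
b = rng.uniform(0, 2 * np.pi, size=D)

X = rng.normal(size=(10, d))
Z = random_features(X, W, b)
K_exact = gaussian_kernel(X, X, sigma)
K_approx = Z @ Z.T  # entrywise close to K_exact for large D
```

Because every site shares the same `(W, b)`, the learned decision parameters live in a common `D`-dimensional space and can be exchanged directly, which is what makes the censoring check on parameter updates well defined across sites.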

