
Parallelization of Online Learning Algorithms


Abstract

Methods, systems, and media are provided for a dynamic batch strategy utilized in parallelization of online learning algorithms. The dynamic batch strategy provides a merge function on the basis of a threshold level difference between the original model state and an updated model state, rather than according to a constant or pre-determined batch size. The merging includes reading a batch of incoming streaming data, retrieving any missing model beliefs from partner processors, and training on the batch of incoming streaming data. The steps of reading, retrieving, and training are repeated until the measured difference in states exceeds a set threshold level. The measured differences which exceed the threshold level are merged for each of the plurality of processors according to attributes. The merged differences which exceed the threshold level are combined with the original partial model states to obtain an updated global model state.
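The read/train/measure/merge loop described in the abstract can be sketched as follows. This is a minimal illustration under assumed details: the model state is a weight vector, the state difference is an L2 norm, the merge is a simple average of per-worker deltas, and the threshold value is arbitrary. The "retrieving any missing model beliefs from partner processors" step is omitted here, and all function names (`train_on_batch`, `dynamic_batch_worker`, `merge_deltas`) are hypothetical, not from the patent.

```python
import numpy as np

THRESHOLD = 0.5  # assumed threshold on the model-state difference (illustrative)

def train_on_batch(weights, batch):
    """Toy 'training' step: nudge the weights toward the batch mean."""
    return weights + 0.1 * (batch.mean(axis=0) - weights)

def dynamic_batch_worker(weights, stream):
    """Read batches and train until the difference between the updated
    and original model state exceeds THRESHOLD (dynamic batch size,
    rather than a constant or pre-determined one); return the
    accumulated difference to be merged."""
    original = weights.copy()
    for batch in stream:
        weights = train_on_batch(weights, batch)
        if np.linalg.norm(weights - original) > THRESHOLD:
            break
    return weights - original  # the measured difference to merge

def merge_deltas(original, deltas):
    """Combine each worker's difference with the original partial model
    state to obtain an updated global model state (averaging sketch)."""
    return original + sum(deltas) / len(deltas)
```

For example, two workers would each run `dynamic_batch_worker` on their own stream, and a coordinator would apply `merge_deltas(original, [delta_a, delta_b])` to produce the updated global state. The key property is that the batch boundary is decided by the measured state difference, not by a fixed count of examples.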
