
Robust Gradient weight compression schemes for deep learning applications


Abstract

Embodiments of the present invention provide a computer-implemented method for adaptive residual gradient compression for training of a deep learning neural network (DNN). The method includes obtaining, by a first learner of a plurality of learners, a current gradient vector for a neural network layer of the DNN, in which the current gradient vector includes gradient weights of parameters of the neural network layer that are calculated from a mini-batch of training data. A current residue vector is generated that includes residual gradient weights for the mini-batch. A compressed current residue vector is generated based on dividing the residual gradient weights of the current residue vector into a plurality of bins of a uniform size and quantizing a subset of the residual gradient weights of one or more bins of the plurality of bins. The compressed current residue vector is then transmitted to a second learner of the plurality of learners or to a parameter server.
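The residue-plus-binning step described in the abstract can be sketched as follows. This is a minimal illustration, not the patented method: the abstract does not specify the quantization rule applied to the subset of weights in each bin, so a top-1-per-bin selection is used here as a simple stand-in, and the helper name `compress_residual_gradient` is hypothetical.

```python
import numpy as np

def compress_residual_gradient(gradient, residue, bin_size=4):
    """Sketch of bin-based residual gradient compression.

    Adds the incoming mini-batch gradient to the carried-over residue,
    splits the result into bins of uniform size, and within each bin
    keeps only the largest-magnitude entry (a simplified stand-in for
    quantizing a subset of weights per bin). The error left behind
    becomes the new residue carried into the next mini-batch.
    """
    current = gradient + residue
    n = len(current)
    # Pad so the vector divides evenly into bins of uniform size.
    pad = (-n) % bin_size
    padded = np.concatenate([current, np.zeros(pad)])
    bins = padded.reshape(-1, bin_size)
    # Within each bin, transmit only the largest-magnitude weight.
    idx = np.argmax(np.abs(bins), axis=1)
    rows = np.arange(bins.shape[0])
    sparse = np.zeros_like(bins)
    sparse[rows, idx] = bins[rows, idx]
    compressed = sparse.reshape(-1)[:n]
    # Untransmitted weight goes back into the residue vector.
    new_residue = current - compressed
    return compressed, new_residue

grad = np.array([0.1, -0.4, 0.05, 0.2, 0.3, -0.1, 0.0, 0.25])
residue = np.zeros(8)
c, r = compress_residual_gradient(grad, residue, bin_size=4)
```

The key invariant is that nothing is lost: the compressed vector plus the new residue always reconstructs the accumulated gradient, so weights skipped in one mini-batch are eventually transmitted once their accumulated residue grows large enough.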

Bibliographic data

  • Publication/Announcement number: GB202009717D0

    Patent type:

  • Publication/Announcement date: 2020-08-12

    Original format: PDF

  • Applicant/Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION

    Application/Patent number: GB20200009717

  • Inventor(s):

    Filing date: 2018-11-30

  • Classification:

  • Country: GB

  • Database entry time: 2022-08-21 10:59:57
