
MULTI-LAYER NEURAL NETWORK PROCESSING BY A NEURAL NETWORK ACCELERATOR USING HOST COMMUNICATED MERGED WEIGHTS AND A PACKAGE OF PER-LAYER INSTRUCTIONS


Abstract

In the disclosed methods and systems for processing in a neural network system, a host computer system writes a plurality of weight matrices associated with a plurality of layers of a neural network to a memory shared with a neural network accelerator. The host computer system further assembles a plurality of per-layer instructions into an instruction package. Each per-layer instruction specifies processing of a respective layer of the plurality of layers of the neural network, and respective offsets of weight matrices in a shared memory. The host computer system writes input data and the instruction package to the shared memory. The neural network accelerator reads the instruction package from the shared memory and processes the plurality of per-layer instructions of the instruction package.
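The abstract describes a host/accelerator split: the host writes merged per-layer weight matrices into shared memory, assembles per-layer instructions that record each layer's weight offset into a single package, and writes the package plus input data to shared memory for the accelerator to consume. The sketch below is a minimal, hypothetical Python illustration of that flow, not the patented implementation; all names (SharedMemory, LayerInstruction, host_prepare, accelerator_run) and the ReLU stand-in layer are assumptions for illustration only.

```python
# Hypothetical sketch of the host/accelerator flow described in the abstract.
# A flat buffer stands in for the shared memory; the "accelerator" is simulated
# by a loop that walks the instruction package layer by layer.

from dataclasses import dataclass
import numpy as np


@dataclass
class LayerInstruction:
    """One per-layer instruction: the layer's weight offset in shared memory and its shape."""
    weight_offset: int      # element offset of this layer's weight matrix
    in_features: int
    out_features: int


class SharedMemory:
    """Flat float buffer standing in for memory shared by host and accelerator."""
    def __init__(self, size: int):
        self.buf = np.zeros(size, dtype=np.float32)

    def write(self, offset: int, data: np.ndarray) -> None:
        flat = data.ravel()
        self.buf[offset:offset + flat.size] = flat

    def read(self, offset: int, count: int) -> np.ndarray:
        return self.buf[offset:offset + count]


def host_prepare(shared: SharedMemory, weights: list[np.ndarray]) -> list[LayerInstruction]:
    """Host side: write the merged weights and assemble the per-layer instruction package."""
    package, offset = [], 0
    for w in weights:
        shared.write(offset, w)
        package.append(LayerInstruction(offset, w.shape[1], w.shape[0]))
        offset += w.size
    return package


def accelerator_run(shared: SharedMemory, package: list[LayerInstruction],
                    x: np.ndarray) -> np.ndarray:
    """Accelerator side: process each per-layer instruction in the package in order."""
    for instr in package:
        w = shared.read(instr.weight_offset,
                        instr.in_features * instr.out_features)
        w = w.reshape(instr.out_features, instr.in_features)
        x = np.maximum(w @ x, 0.0)   # matrix multiply + ReLU as a stand-in for layer processing
    return x


if __name__ == "__main__":
    shared = SharedMemory(size=1 << 16)
    weights = [np.random.randn(16, 8).astype(np.float32),
               np.random.randn(4, 16).astype(np.float32)]
    package = host_prepare(shared, weights)
    result = accelerator_run(shared, package, np.random.randn(8).astype(np.float32))
    print(result.shape)   # (4,)
```

The key point the sketch mirrors is that the accelerator never receives weights or layer parameters directly; it only reads the instruction package and resolves each layer's weights through the offsets the host recorded.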
