
FAST DEEP NEURAL NETWORK FEATURE TRANSFORMATION VIA OPTIMIZED MEMORY BANDWIDTH UTILIZATION


Abstract

Deep Neural Networks (DNNs) with many hidden layers and many units per layer are very flexible models with a very large number of parameters. As such, DNNs are challenging to optimize. To achieve real-time computation, embodiments disclosed herein enable fast DNN feature transformation via optimized memory bandwidth utilization. To optimize memory bandwidth utilization, a rate of accessing memory may be reduced based on a batch setting. A memory, corresponding to a selected given output neuron of a current layer of the DNN, may be updated with an incremental output value computed for the selected given output neuron as a function of input values of a selected few non-zero input neurons of a previous layer of the DNN in combination with weights between the selected few non-zero input neurons and the selected given output neuron, wherein a number of the selected few corresponds to the batch setting.
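The batched update described in the abstract can be sketched in code: instead of touching an output neuron's memory once for every non-zero input, the contributions of a small group of non-zero inputs are accumulated and written back in a single update. The following Python sketch is illustrative only; the function name sparse_layer_forward, the batch_setting parameter, and the NumPy-based representation are assumptions, not taken from the patent.

import numpy as np

def sparse_layer_forward(inputs, weights, batch_setting=4):
    # Hypothetical sketch of the batched-update idea from the abstract.
    # inputs:  1-D activation vector of the previous layer (mostly zeros).
    # weights: 2-D matrix; weights[i, j] connects input neuron i to output neuron j.
    # batch_setting: how many non-zero inputs are combined per memory update.
    n_out = weights.shape[1]
    outputs = np.zeros(n_out)              # memory holding the output layer
    nz = np.flatnonzero(inputs)            # indices of non-zero input neurons

    for j in range(n_out):                 # selected given output neuron
        for start in range(0, len(nz), batch_setting):
            batch = nz[start:start + batch_setting]   # selected few non-zero inputs
            # Incremental output value: a function of the batch's input values
            # and the weights between those inputs and output neuron j.
            increment = np.dot(inputs[batch], weights[batch, j])
            outputs[j] += increment        # one memory update per batch
    return outputs

# Quick check against the dense computation (both should agree, since
# zero inputs contribute nothing):
x = np.array([0.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.3, 0.0])
W = np.random.default_rng(0).standard_normal((8, 4))
assert np.allclose(sparse_layer_forward(x, W, batch_setting=2), x @ W)

With batch_setting = 4, the output memory is read and written once per group of four non-zero inputs rather than once per input, which is the reduced rate of memory access the abstract refers to.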
