ORDERING COMPUTATIONS OF A MACHINE LEARNING NETWORK IN A MACHINE LEARNING ACCELERATOR FOR EFFICIENT MEMORY USAGE
Abstract
A compiler efficiently manages memory usage in a machine learning accelerator by intelligently ordering the computations of a machine learning network (MLN). The compiler identifies a set of partial networks of the MLN, each representing the portion of the MLN, spanning multiple layers, on which an output or set of outputs depends. Because any given output may depend on only a limited subset of the intermediate outputs from the prior layers, each partial network may include only a small fraction of the intermediate outputs of each layer. Instead of implementing the MLN by computing one layer at a time, the compiler schedules instructions to implement the partial networks sequentially. As each layer of a partial network is completed, its intermediate outputs can be released from memory. The described technique enables intermediate outputs to be streamed directly between processing elements of the machine learning accelerator without requiring large transfers to and from external memory.
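The memory benefit of ordering by partial networks rather than by layers can be illustrated with a small simulation. The sketch below is hypothetical and not taken from the patent: it models a three-layer network in which each output depends on a three-element window of the previous layer, replays a schedule while freeing each intermediate once all of its consumers have run, and reports the peak number of live intermediates under each ordering.

```python
# Hypothetical sketch (not the patented compiler): compare peak intermediate
# memory for layer-by-layer execution vs. sequentially executing the partial
# network behind each final output.
W, LAYERS = 16, 3          # layer width and number of layers (toy values)

def deps(j):
    """Indices in the previous layer that output j depends on (kernel-3 window)."""
    return [i for i in range(j - 1, j + 2) if 0 <= i < W]

def simulate(schedule):
    """Replay (layer, index) steps; free an intermediate once every consumer
    in the next layer has run. Return the peak count of live intermediates."""
    consumers = {}                      # (layer, i) -> remaining consumer count
    for layer in range(1, LAYERS):
        for j in range(W):
            for i in deps(j):
                consumers[(layer - 1, i)] = consumers.get((layer - 1, i), 0) + 1
    live, peak = set(), 0
    for layer, j in schedule:
        if layer < LAYERS - 1:          # final outputs are emitted, not buffered
            live.add((layer, j))
        peak = max(peak, len(live))
        if layer > 0:
            for i in deps(j):
                consumers[(layer - 1, i)] -= 1
                if consumers[(layer - 1, i)] == 0:
                    live.discard((layer - 1, i))
    return peak

def partial_network(j):
    """All (layer, index) nodes that final output j depends on, in
    topological (layer-ascending) order."""
    needed = {LAYERS - 1: {j}}
    for layer in range(LAYERS - 1, 0, -1):
        needed[layer - 1] = {i for out in needed[layer] for i in deps(out)}
    return [(l, i) for l in range(LAYERS) for i in sorted(needed[l])]

# Schedule 1: compute each layer in full before the next (layer-major).
layer_major = [(l, j) for l in range(LAYERS) for j in range(W)]

# Schedule 2: implement one partial network per final output, skipping
# nodes already computed by an earlier partial network.
done, partial = set(), []
for j in range(W):
    for node in partial_network(j):
        if node not in done:
            done.add(node)
            partial.append(node)

# Layer-major buffers an entire layer; partial-network order keeps only a
# small sliding window of intermediates live.
print("layer-major peak:", simulate(layer_major))
print("partial-network peak:", simulate(partial))
```

The partial-network schedule keeps only the few intermediates still awaiting a consumer, which is what lets outputs stream between processing elements instead of round-tripping whole layers through external memory.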