Nonlinear Theory and Its Applications

A study on a low power optimization algorithm for an edge-AI device



Abstract

Although research on the inference phase of edge artificial intelligence (AI) has made considerable progress, the training phase that it requires remains an unsolved problem. Neural network (NN) processing has two phases: inference and training. In the training phase, an NN incurs a high calculation cost, and the number of bits (bitwidth) required for training is several orders of magnitude larger than that for inference. Training algorithms optimized for software are not appropriate for training hardware-oriented NNs. We therefore propose a new training algorithm for edge AI: backpropagation (BP) with a ternarized gradient. This ternarized backpropagation (TBP) strikes a balance between calculation cost and performance. Empirical results demonstrate that, in a two-class classification task, TBP works well in practice and compares favorably with 16-bit BP (Fixed-BP).
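To make the idea of a ternarized gradient concrete, the following is a minimal NumPy sketch of training in which the gradient is mapped to {-1, 0, +1} before each weight update. The threshold rule, the logistic-regression model, and the synthetic two-class data are illustrative assumptions; this is not the paper's TBP implementation or its Fixed-BP baseline.

```python
import numpy as np

def ternarize(grad, scale=0.7):
    """Map a gradient array to {-1, 0, +1} using a magnitude threshold.

    The threshold (a fraction of the mean absolute gradient) is an
    assumption for illustration; the abstract does not specify TBP's
    exact quantization scheme.
    """
    thr = scale * np.mean(np.abs(grad))
    t = np.zeros_like(grad)
    t[grad > thr] = 1.0
    t[grad < -thr] = -1.0
    return t

# Toy two-class classification: logistic regression trained with either
# full-precision gradients or ternarized gradients.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
               rng.normal(+1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def train(ternary, lr=0.05, epochs=200):
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output
        gw = X.T @ (p - y) / len(y)              # weight gradient
        gb = np.mean(p - y)                      # bias gradient
        if ternary:
            gw = ternarize(gw)                   # {-1, 0, +1} weight update
            gb = float(np.sign(gb))              # sign-only bias update
        w -= lr * gw
        b -= lr * gb
    preds = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
    return np.mean(preds == y)

print("full-precision BP accuracy  :", train(ternary=False))
print("ternarized-gradient accuracy:", train(ternary=True))
```

Running the script prints the accuracy of both variants on the toy data, mirroring the kind of comparison between TBP and full-precision BP that the abstract describes.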
