Asia and South Pacific Design Automation Conference

Exploring energy and accuracy tradeoff in structure simplification of trained deep neural networks


Abstract

This paper presents a structure simplification procedure that enables efficient energy-accuracy tradeoffs in the implementation of trained deep neural networks (DNNs). The procedure identifies and eliminates redundant neurons in any layer of the DNN based on the trained weights connected to these neurons, and it may be applied to all layers of a DNN. For each layer, different configurations with Pareto-optimal accuracy and energy consumption are realized. Our work is the first to use energy-accuracy tradeoffs to guide the optimal structure realization of trained DNNs. After redundant neurons are discarded, the weights of the remaining neurons are updated using matrix multiplication, without retraining. Retraining may still be applied if desired to further fine-tune performance. In our experiments, we show that the energy-accuracy tradeoff provides clear guidance for achieving efficient realizations of trained DNNs. We also observe significant implementation cost reductions of up to 33X in energy and 12X in memory, while the performance (accuracy) loss is negligible.
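The abstract does not spell out how the weight update is performed, but the general idea of eliminating a redundant neuron and folding its effect into the remaining weights without retraining can be sketched as follows. This is a hypothetical illustration, not the paper's actual procedure: it assumes the simplest redundancy criterion (a hidden neuron whose incoming weight vector duplicates another's, so their activations are identical for any input) and ignores bias terms. The function name `merge_duplicate_neurons` and the tolerance `tol` are illustrative choices.

```python
import numpy as np

def merge_duplicate_neurons(W_in, W_out, tol=1e-6):
    """Drop hidden neurons whose incoming weight row duplicates an
    earlier neuron's, and fold their outgoing weights into the kept
    neuron so the layer's output is unchanged (no retraining).

    W_in:  (n_hidden, n_in)   -- row j = incoming weights of neuron j
    W_out: (n_next, n_hidden) -- column j = outgoing weights of neuron j
    """
    keep = []
    merged = W_out.copy()
    for j in range(W_in.shape[0]):
        dup = next((i for i in keep
                    if np.linalg.norm(W_in[j] - W_in[i]) < tol), None)
        if dup is None:
            keep.append(j)
        else:
            # Same incoming weights imply the same activation for any
            # input, so neuron j's downstream contribution can be added
            # to neuron dup's outgoing column -- a pure weight update.
            merged[:, dup] += merged[:, j]
    return W_in[keep], merged[:, keep]
```

Because the merged network computes exactly the same function for duplicate neurons, this simplification is lossless; the paper's Pareto-optimal configurations would come from relaxing the redundancy criterion (larger `tol`, or more general linear dependence), trading accuracy for fewer neurons and lower energy.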


