
Quantization-based Optimization of CNN Inference

Abstract

With specially designed hardware, the FPGA is a promising candidate for neural network inference acceleration. The main challenge facing FPGA-based accelerator designs is the scarcity of on-chip resources. We consider using multiple FPGAs to overcome this problem. However, even with multiple FPGAs, insufficient resources and communication delays remain non-negligible problems. In this paper, we use the quantization method based on LQ-NET, proposed by the Microsoft group, to reduce resource usage and communication traffic. At the same time, accuracy can be traded off against resource usage by adjusting the bit width. Synthesis results for the first two layers of AlexNet indicate that both BRAM usage and performance are improved.
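To illustrate the kind of quantization the abstract refers to: LQ-Nets represents each quantized value as the inner product of a K-element basis vector with a binary code in {-1, +1}^K, so a K-bit quantizer has 2^K levels. The sketch below is a simplified, hedged illustration with a fixed basis (in LQ-Nets the basis is learned during training); the function name and basis values are our own for illustration, not from the paper.

```python
import itertools
import numpy as np

def lq_quantize(x, basis):
    """Map each element of x to the nearest codeword sum_i basis[i] * b[i],
    where b is a binary code in {-1, +1}^K.

    Simplified LQ-Nets-style quantization with a fixed (not learned) basis.
    """
    K = len(basis)
    # Enumerate all 2^K binary codes (feasible for small K, e.g. 2-4 bits).
    codes = np.array(list(itertools.product([-1.0, 1.0], repeat=K)))
    levels = codes @ np.asarray(basis)  # the 2^K quantization levels
    # Nearest-level lookup for every element of x.
    idx = np.abs(np.asarray(x)[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# 2-bit example: basis [1.0, 0.5] gives levels {-1.5, -0.5, 0.5, 1.5}.
w = np.array([0.4, -1.2, 0.05, 2.0])
print(lq_quantize(w, [1.0, 0.5]))  # -> [ 0.5 -1.5  0.5  1.5]
```

Because every quantized weight is stored as a few bits plus a tiny shared basis, both on-chip memory (BRAM) and inter-FPGA traffic shrink roughly in proportion to the bit width, which is the tradeoff the abstract describes.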
