China Semiconductor Technology International Conference

A Low-Bit Quantized and HLS-Based Neural Network FPGA Accelerator for Object Detection



Abstract

In this paper, an HLS-based convolutional neural network (CNN) accelerator is designed for FPGAs, and channel-wise low-bit quantization is applied to YOLOv3-Tiny, whose weights are quantized to 2 bits while activations are quantized to 8 bits. The quantization range is learnable during training to prevent severe accuracy loss. The accelerator uses a sliding-window technique to improve data reusability, and an efficient processing element (PE) is designed to exploit low-bit computation. The design makes full use of DSP and LUT resources and exploits optimal parallelism on an embedded FPGA. Our design reaches 90.6 GOP/s on a PYNQ-Z2 at 150 MHz, outperforming other accelerators implemented on the same platform in terms of peak performance and power efficiency.
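The abstract's key algorithmic ingredient is channel-wise weight quantization with a range that is learned during training. Below is a minimal PyTorch sketch of what such a quantizer can look like, assuming a per-output-channel learnable scale and a straight-through estimator for the rounding; the class name, initialization, and bit-level details are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ChannelWiseWeightQuant(nn.Module):
    """Hypothetical sketch: channel-wise 2-bit weight quantization with a
    learnable range (straight-through estimator). Not the paper's code."""

    def __init__(self, out_channels: int, num_bits: int = 2):
        super().__init__()
        # one learnable scale (quantization range) per output channel
        self.scale = nn.Parameter(torch.ones(out_channels, 1, 1, 1))
        self.qmin = -(2 ** (num_bits - 1))      # -2 for signed 2-bit weights
        self.qmax = 2 ** (num_bits - 1) - 1     # +1 for signed 2-bit weights

    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        # weight shape: (out_channels, in_channels, kH, kW)
        s = self.scale.abs() + 1e-8              # keep the scale positive
        w = torch.clamp(weight / s, self.qmin, self.qmax)
        w_q = torch.round(w)
        # straight-through estimator: the forward pass uses the rounded values,
        # the backward pass flows through w, so the range stays learnable
        w_q = w + (w_q - w).detach()
        return w_q * s
```

In use, such a quantizer would be applied to a convolution's weight tensor just before the convolution (e.g. `quant(conv.weight)`), so the per-channel scales are updated together with the weights by the regular optimizer during training.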
