LUTNet: Rethinking Inference in FPGA Soft Logic

Abstract

Research has shown that deep neural networks contain significant redundancy, and that high classification accuracies can be achieved even when weights and activations are quantised down to binary values. Network binarisation on FPGAs greatly increases area efficiency by replacing resource-hungry multipliers with lightweight XNOR gates. However, an FPGA's fundamental building block, the K-LUT, is capable of implementing far more than an XNOR: it can perform any K-input Boolean operation. Inspired by this observation, we propose LUTNet, an end-to-end hardware-software framework for the construction of area-efficient FPGA-based neural network accelerators using the native LUTs as inference operators. We demonstrate that the exploitation of LUT flexibility allows for far heavier pruning than possible in prior works, resulting in significant area savings while achieving comparable accuracy. Against the state-of-the-art binarised neural network implementation, we achieve twice the area efficiency for several standard network models when inferencing popular datasets. We also demonstrate that even greater energy efficiency improvements are obtainable.
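The abstract's central observation — that a K-input LUT is a 2^K-entry truth table and can therefore realise *any* K-input Boolean function, not just an XNOR — can be illustrated with a minimal sketch. This is a hypothetical illustration, not LUTNet's actual implementation: the function names and the example 4-input function are invented for exposition.

```python
def make_lut(truth_table):
    """Model a K-input LUT as a 2**K-entry truth table.

    The first argument bit is the most significant index bit.
    """
    def lut(*bits):
        index = 0
        for b in bits:
            index = (index << 1) | b
        return truth_table[index]
    return lut

# A binarised-network XNOR is just one special case:
# the 2-input LUT with truth table [1, 0, 0, 1].
xnor = make_lut([1, 0, 0, 1])

# But the same fabric can absorb a richer function, e.g. a
# (hypothetical) pruned sub-expression f(a,b,c,d) = (a XNOR b) OR (c AND d)
# folded into a single 4-LUT -- something one XNOR gate cannot express.
table4 = [int((a == b) or (c and d))
          for a in (0, 1) for b in (0, 1)
          for c in (0, 1) for d in (0, 1)]
fused = make_lut(table4)
```

Packing such multi-input functions into single LUTs is what lets the paper prune far more connections than XNOR-based designs while keeping accuracy comparable: each physical LUT carries more of the network's logic.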
