Asia and South Pacific Design Automation Conference

Supporting compressed-sparse activations and weights on SIMD-like accelerator for sparse convolutional neural networks

Abstract

Sparsity is widely observed in convolutional neural networks: a large portion of both activations and weights can be zeroed without impairing the result. By keeping the data in a compressed-sparse format, energy consumption can be cut down considerably thanks to reduced memory traffic. However, the wide SIMD-like MAC engines adopted in many CNN accelerators cannot consume compressed inputs because of data misalignment. In this work, a novel Dual Indexing Module (DIM) is proposed to handle the alignment issue efficiently when activations and weights are both kept in compressed-sparse format. The DIM is implemented in a representative SIMD-like CNN accelerator and is able to exploit both compressed-sparse activations and weights. Synthesis results in a 40nm technology show that the DIM improves energy consumption by up to 46% and Energy-Delay-Product (EDP) by up to 55.4%.
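
The abstract does not describe the DIM microarchitecture itself, but the alignment problem it targets can be sketched in software. Below is a minimal Python illustration, assuming each compressed-sparse vector is stored as parallel (values, indices) arrays with ascending indices; the function name sparse_dot and the two-pointer matching loop are illustrative assumptions, not the paper's design.

# A minimal sketch of index matching for compressed-sparse operands.
# Assumptions (not from the paper): each vector is stored as parallel
# (values, indices) arrays with indices sorted ascending.

def sparse_dot(act_vals, act_idx, w_vals, w_idx):
    """Accumulate products only where a nonzero activation and a
    nonzero weight share the same dense index."""
    acc = 0
    i = j = 0
    while i < len(act_idx) and j < len(w_idx):
        if act_idx[i] == w_idx[j]:      # indices align: a useful MAC
            acc += act_vals[i] * w_vals[j]
            i += 1
            j += 1
        elif act_idx[i] < w_idx[j]:     # skip unmatched activation
            i += 1
        else:                           # skip unmatched weight
            j += 1
    return acc

# Dense [0, 3, 0, 5] (nonzeros 3@1, 5@3) dotted with [2, 0, 0, 4]
# (nonzeros 2@0, 4@3): only index 3 matches, so the result is 5 * 4 = 20.
print(sparse_dot([3, 5], [1, 3], [2, 4], [0, 3]))  # 20

In a wide SIMD MAC engine the matching position differs from lane to lane, which is why compressed operands break the fixed alignment the datapath expects; resolving that mismatch in hardware is, per the abstract, the role of the DIM.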
