IEEE Transactions on Very Large Scale Integration (VLSI) Systems

TiM-DNN: Ternary In-Memory Accelerator for Deep Neural Networks



Abstract

The use of lower precision has emerged as a popular technique to optimize the compute and storage requirements of complex deep neural networks (DNNs). In the quest for lower precision, recent studies have shown that ternary DNNs (which represent weights and activations by signed ternary values) are a promising sweet spot, achieving accuracy close to full-precision networks on complex tasks. We propose TiM-DNN, a programmable in-memory accelerator that is specifically designed to execute ternary DNNs. TiM-DNN supports various ternary representations including unweighted {-1, 0, 1}, symmetric weighted {-a, 0, a}, and asymmetric weighted {-a, 0, b} ternary systems. The building blocks of TiM-DNN are TiM tiles, specialized memory arrays that perform massively parallel signed ternary vector-matrix multiplications with a single access. TiM tiles are in turn composed of ternary processing cells (TPCs), bit-cells that function as both ternary storage units and signed ternary multiplication units. We evaluate an implementation of TiM-DNN in 32-nm technology using an architectural simulator calibrated with SPICE simulations and RTL synthesis. We evaluate TiM-DNN across a suite of state-of-the-art DNN benchmarks including both deep convolutional and recurrent neural networks. A 32-tile instance of TiM-DNN achieves a peak performance of 114 TOPS, consumes 0.9 W of power, and occupies 1.96 mm^2 of chip area, representing 300x and 388x improvements in TOPS/W and TOPS/mm^2, respectively, compared to an NVIDIA Tesla V100 GPU. In comparison to specialized DNN accelerators, TiM-DNN achieves 55x-240x and 160x-291x improvements in TOPS/W and TOPS/mm^2, respectively. Finally, when compared to a well-optimized near-memory accelerator for ternary DNNs, TiM-DNN demonstrates a 3.9x-4.7x improvement in system-level energy and a 3.2x-4.2x speedup, underscoring the potential of in-memory computing for ternary DNNs.
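The arithmetic that a TiM tile performs in-memory can be illustrated in software. The sketch below is not from the paper: it quantizes values to a symmetric weighted ternary system {-a, 0, a} using a simple threshold-and-mean heuristic (the paper's exact quantizer is not given in the abstract), and then emulates a signed ternary vector-matrix multiply on the resulting {-1, 0, 1} codes, with the scale factors folded in afterward. The function names and the threshold parameter are illustrative assumptions.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Quantize real values to a symmetric weighted ternary system {-a, 0, a}.

    Values with |w| <= threshold map to 0; the scale `a` is taken as the mean
    magnitude of the surviving values (a common heuristic, not necessarily the
    paper's quantizer). Returns ternary codes in {-1, 0, 1} and the scale a.
    """
    mask = np.abs(w) > threshold
    a = np.abs(w[mask]).mean() if mask.any() else 0.0
    return np.sign(w) * mask, a

def ternary_vmm(x_codes, w_codes, a_x, a_w):
    """Signed ternary vector-matrix multiply on {-1, 0, 1} codes.

    A TiM tile computes the code-level dot products massively in parallel with
    a single array access; here the same arithmetic is emulated in software,
    with the two scale factors applied to the integer result.
    """
    return (x_codes @ w_codes) * (a_x * a_w)

# Illustrative usage: ternarize a small weight matrix and activation vector,
# then multiply them in the ternary domain.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
x = rng.normal(size=4)

w_codes, a_w = ternarize(W)
x_codes, a_x = ternarize(x)
y = ternary_vmm(x_codes, w_codes, a_x, a_w)  # approximate x @ W
```

The unweighted {-1, 0, 1} system corresponds to fixing both scales to 1, and an asymmetric system {-a, 0, b} would use separate scales for the negative and positive codes.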
