IEEE Computer Society Annual Symposium on VLSI

Accelerating Deep Neural Networks in Processing-in-Memory Platforms: Analog or Digital Approach?



Abstract

Research on AI accelerator design has attracted great interest, and accelerating Deep Neural Networks (DNNs) using Processing-in-Memory (PIM) platforms is an actively explored direction with great potential. PIM platforms, which aim to address the power-wall and memory-wall bottlenecks simultaneously, have shown orders-of-magnitude performance enhancement over conventional computing platforms based on the Von Neumann architecture. As one direction for accelerating DNNs in PIM, the resistive memory array (a.k.a. crossbar) has drawn great research interest: its analog current-mode weighted-summation operation intrinsically matches the Multiplication-and-Accumulation (MAC) operation that dominates DNN workloads, making it one of the most promising candidates. An alternative direction for PIM-based DNN acceleration is bulk bit-wise logic operations performed directly on the contents of digital memories. Thanks to the high fault tolerance of DNNs, recent algorithmic progress has successfully quantized DNN parameters to low bit-width representations while maintaining competitive accuracy. Such DNN quantization techniques essentially convert the MAC operation into much simpler addition/subtraction or comparison operations, which can be performed by bulk bit-wise logic operations in a highly parallel fashion. In this paper, we build a comprehensive evaluation framework to quantitatively compare and analyze the aforementioned PIM-based analog and digital approaches to DNN acceleration.
