IEEE International Solid-State Circuits Conference

24.5 A Twin-8T SRAM Computation-In-Memory Macro for Multiple-Bit CNN-Based Machine Learning

Abstract

Computation-in-memory (CIM) is a promising avenue for improving the energy efficiency of multiply-and-accumulate (MAC) operations in AI chips. Multi-bit CNNs are required for high inference accuracy in many applications [1-5]. SRAM-based CIM faces several challenges and tradeoffs: (1) the tradeoff between signal margin, cell stability, and area overhead; (2) process variation on the high-weighted bits, which dominates the end-result error rate; (3) the tradeoff between input bandwidth, speed, and area. Previous SRAM CIM macros were limited to binary MAC operations for fully connected networks [1], or used CIM only for multiplication [2] or weight-combination operations [3], relying on additional large-area near-memory computing (NMC) logic for summation or MAC operations.
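To make the bit-weighting issue in point (2) concrete, the following Python sketch (illustrative only, not taken from the paper; the array sizes, the 4b bit width, and the name `bitwise_mac` are assumptions) computes a MAC by accumulating per-bit partial sums the way a bit-weighted CIM readout would, and shows why an error on the most significant weight bit perturbs the result far more than the same error on the least significant bit.

```python
# Illustrative sketch of a bit-weighted MAC (not the macro described in the paper).
import numpy as np

def bitwise_mac(inputs, weights, w_bits=4):
    """Sum(inputs * weights) built from per-bit partial sums: each weight-bit
    column contributes one partial MAC, scaled by its binary weight 2**b."""
    total = 0
    for b in range(w_bits):
        bit_col = (weights >> b) & 1            # b-th bit of every (unsigned) weight
        partial = int(np.dot(inputs, bit_col))  # partial sum for this bit column
        total += partial << b                   # MSB column counts 2**(w_bits-1) times more
    return total

rng = np.random.default_rng(0)
x = rng.integers(0, 16, size=16)                # hypothetical 4b activations
w = rng.integers(0, 16, size=16)                # hypothetical 4b unsigned weights
assert bitwise_mac(x, w) == int(np.dot(x, w))   # bit-weighted sum equals the exact MAC

# A 1-LSB readout error on the MSB partial sum perturbs the result by 2**(w_bits-1) = 8,
# whereas the same error on the LSB column perturbs it by only 1, which is why variation
# on the high-weighted bits dominates the end-result error rate.
```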