
24.5 A Twin-8T SRAM Computation-In-Memory Macro for Multiple-Bit CNN-Based Machine Learning



Abstract

Computation-in-memory (CIM) is a promising avenue for improving the energy efficiency of multiply-and-accumulate (MAC) operations in AI chips. Multi-bit CNNs are required for high inference accuracy in many applications [1-5]. SRAM-based CIM faces several challenges and tradeoffs: (1) tradeoffs among signal margin, cell stability, and area overhead; (2) process variation in the high-weighted bits dominates the end-result error rate; (3) tradeoffs among input bandwidth, speed, and area. Previous SRAM CIM macros were limited to binary MAC operations for fully connected networks [1], or used CIM only for multiplication [2] or weight-combination operations [3], requiring additional large-area near-memory computing (NMC) logic for summation or MAC operations.
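As a point of reference (not taken from the paper itself), the sketch below shows the multi-bit MAC that such a macro computes, and a bit-sliced decomposition of the weights that illustrates why process variation in the high-weighted bits dominates the end-result error: each per-bit partial sum is scaled by 2^b, so any error in the most significant bit plane is amplified the most. The function names, vector sizes, and 4-bit widths are illustrative assumptions, not details from the macro.

```python
# Illustrative sketch (assumed, not from the paper): the multi-bit MAC a CIM
# macro evaluates, and a bit-sliced view showing the weighting of each bit plane.
import numpy as np

def mac_reference(inputs, weights):
    """Plain multi-bit MAC: sum_i inputs[i] * weights[i]."""
    return int(np.dot(inputs, weights))

def mac_bit_sliced(inputs, weights, weight_bits=4):
    """Same MAC, computed as a weighted sum of per-bit-plane partial sums.
    The bit plane b contributes a binary MAC scaled by 2**b, so an error in
    bit plane (weight_bits - 1) is scaled by the largest factor."""
    total = 0
    for b in range(weight_bits):
        bit_plane = (weights >> b) & 1            # 0/1 weights for this bit position
        partial = int(np.dot(inputs, bit_plane))  # binary MAC partial sum
        total += partial << b                     # apply the bit significance
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(0, 16, size=8)   # example 4b inputs (values are arbitrary)
    w = rng.integers(0, 16, size=8)   # example 4b unsigned weights
    assert mac_reference(x, w) == mac_bit_sliced(x, w)
    print(mac_reference(x, w))
```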
