IEEE Electron Device Letters

Technological Design of 3D NAND-Based Compute-in-Memory Architecture for GB-Scale Deep Neural Network



Abstract

In this work, a heterogeneous integration strategy for a 3D NAND-based compute-in-memory (CIM) architecture is proposed for large-scale deep neural networks (DNNs). While most CIM architectures reported to date focus on image classification models with MB-scale parameters, we target large language translation models with GB-scale parameters. Our 3D NAND CIM architecture design exploits two fabrication techniques, wafer bonding and CMOS under array (CUA), to integrate CMOS circuits, 3D NAND cells, and high-voltage (HV) transistors at different tiers without thermal budget issues during fabrication. The bonding pads between the two wafers are designed to transfer the input and output vectors while maintaining a pitch of ~1 μm, which is feasible with hybrid bonding. The chip size of the 512 Gb, 128-layer 3D NAND CIM architecture is estimated to be 166 mm² with 7 nm FinFET logic transistors. Using the physical and electrical parameters of standard 3D NAND cells, an energy efficiency of 1.15-19.01 tera operations per second per watt (TOPS/W) is achieved.
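
As a rough sanity check on the capacity and efficiency figures quoted in the abstract, the short Python sketch below estimates how much of a 512 Gb array a GB-scale model would occupy and what the reported TOPS/W range implies per inference. Only the 512 Gb array capacity and the 1.15-19.01 TOPS/W range come from the abstract; the model size, weight precision, and per-weight operation count are illustrative assumptions, not figures from the paper.

    # Back-of-envelope sizing sketch for a 3D NAND CIM array holding a GB-scale DNN.
    # From the abstract: 512 Gb array capacity, 1.15-19.01 TOPS/W efficiency range.
    # Assumed (not from the paper): model size, weight precision, ops per weight.

    GBIT = 1e9  # bits

    array_capacity_bits = 512 * GBIT   # 512 Gb, 128-layer 3D NAND (from abstract)
    model_params = 2e9                 # assumed GB-scale translation model (~2 B weights)
    weight_bits = 8                    # assumed 8-bit weight precision

    weight_storage_bits = model_params * weight_bits
    fits = weight_storage_bits <= array_capacity_bits
    print(f"Weights need {weight_storage_bits / GBIT:.1f} Gb of the 512 Gb array "
          f"({'fits' if fits else 'does not fit'} on a single chip)")

    # Energy per inference over the reported efficiency range, assuming
    # 2 ops (multiply + accumulate) per weight per inference pass.
    ops_per_inference = 2 * model_params
    for tops_per_watt in (1.15, 19.01):
        joules = ops_per_inference / (tops_per_watt * 1e12)
        print(f"~{joules * 1e3:.2f} mJ per inference at {tops_per_watt} TOPS/W")

Under these assumptions, a 2-billion-parameter model at 8-bit precision occupies only 16 Gb of the 512 Gb array, leaving ample headroom, and one dense pass over the weights costs roughly 0.2-3.5 mJ across the reported efficiency range.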
