IEEE Micro
Circuits and Architectures for In-Memory Computing-Based Machine Learning Accelerators


Abstract

Machine learning applications, especially deep neural networks (DNNs), have seen ubiquitous use in computer vision, speech recognition, and robotics. However, the growing complexity of DNN models has necessitated efficient hardware implementations. The key compute primitives of DNNs are matrix-vector multiplications, which lead to significant data movement between memory and processing units in today's von Neumann systems. A promising alternative is to colocate memory and processing elements, which can be extended further to performing computations inside the memory itself. We believe in-memory computing is a propitious candidate for future DNN accelerators, since it mitigates the memory-wall bottleneck. In this article, we discuss various in-memory computing primitives in both CMOS and emerging nonvolatile memory (NVM) technologies. Subsequently, we describe how such primitives can be incorporated into standalone machine learning accelerator architectures. Finally, we analyze the challenges associated with designing such in-memory computing accelerators and explore future opportunities.
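The matrix-vector multiplication primitive the abstract refers to can be illustrated with a minimal sketch (not taken from the article; the function name and toy values are illustrative). In a von Neumann machine, every weight must be fetched from memory to compute each output; an in-memory crossbar instead stores the weight matrix in place and accumulates each output dot product along a bitline.

```python
def matvec(W, x):
    """Compute y = W * x, i.e. y[i] = sum_j W[i][j] * x[j].

    Each output element is one dot product -- the per-neuron
    accumulation that an in-memory crossbar would perform in place.
    """
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

# Toy 2x3 weight matrix and 3-element input vector (hypothetical values).
W = [[1, 2, 3],
     [4, 5, 6]]
x = [1, 0, -1]

y = matvec(W, x)
print(y)  # [-2, -2]
```

For an m-by-n weight matrix, a conventional accelerator moves all m*n weights across the memory interface per inference; colocating storage and compute removes that traffic, which is the data-movement saving the abstract highlights.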
