IEEE Microelectronics Design and Test Symposium

Opportunities and Limitations of in-Memory Multiply-and-Accumulate Arrays

Abstract

In-memory computing is a promising solution to the memory bottleneck, which grows increasingly severe in modern machine learning systems. In this paper, we introduce a random access memory (RAM) architecture that incorporates deep learning inference capability. Because the design is fully digital, the architecture can be applied to a variety of commercially available volatile and non-volatile memory technologies. We also introduce a multi-chip architecture to accommodate varying network sizes and to maximize parallel computation. Moreover, we discuss the opportunities and limitations of in-memory computing in terms of power, latency, and performance as future neural networks scale. To do so, we applied the architecture to several prevalent neural networks, namely the artificial neural network (ANN), the convolutional neural network (CNN), and the Transformer network, and compared the results.
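The abstract does not spell out the dataflow, but the two central ideas, a digital in-memory multiply-and-accumulate array and a multi-chip split of larger layers, can be illustrated with a minimal sketch. Everything below (class names, array sizes, the column-wise partitioning) is an assumption for illustration, not the paper's actual design.

```python
import numpy as np

# Minimal sketch (assumed design, not the paper's): one digital memory
# macro whose stored weights are multiplied and accumulated in place,
# next to the cells, so weights never cross the memory bus.
class InMemoryMACArray:
    def __init__(self, weights: np.ndarray):
        self.weights = weights  # (rows, cols), resident in the array

    def mac(self, inputs: np.ndarray) -> np.ndarray:
        # The input vector is broadcast to every column; each column
        # accumulates one dot product in parallel.
        assert inputs.shape[0] == self.weights.shape[0]
        return inputs @ self.weights


def multi_chip_mac(inputs: np.ndarray, chips: list) -> np.ndarray:
    # Sketch of the multi-chip idea: a layer too wide for one array is
    # split column-wise across chips, which compute in parallel; the
    # partial outputs are then concatenated.
    return np.concatenate([chip.mac(inputs) for chip in chips])


# Example: a 256x512 fully connected layer split over two 256x256 chips.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 512))
chips = [InMemoryMACArray(w[:, :256]), InMemoryMACArray(w[:, 256:])]
x = rng.standard_normal(256)
print(multi_chip_mac(x, chips).shape)  # (512,)
```

The sketch shows why the approach targets the memory bottleneck: the weight matrix stays resident in the arrays, and only activations and partial sums move between chips.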