
An Efficient Racetrack Memory-Based Processing-in-Memory Architecture for Convolutional Neural Networks



Abstract

As a promising architectural paradigm for applications that demand high I/O bandwidth, Processing-in-Memory (PIM) computing techniques have been adopted in designing Convolutional Neural Networks (CNNs). However, due to the notorious memory wall problem, PIM based on existing memory devices still cannot handle complex CNN applications under the constraints of memory bandwidth and processing latency. To mitigate this problem, this paper proposes an efficient PIM architecture based on skyrmion and domain-wall racetrack memories, which further exploits the potential of PIM architectures in terms of processing latency and energy efficiency. By adopting full adders and multipliers built from skyrmion and domain-wall nanowires, the proposed PIM architecture can accommodate complex CNNs at different scales. Experimental results show that, compared with both traditional and state-of-the-art PIM architectures, the proposed architecture drastically reduces the processing latency and improves the energy efficiency of CNNs.
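The dominant workload that the in-memory full adders and multipliers described in the abstract would execute is the multiply-accumulate (MAC) inner loop of a convolution. The sketch below is purely illustrative and not taken from the paper: it shows, in plain Python, the MAC operations that a racetrack-memory PIM design would perform in place rather than shuttling operands across the memory bus.

```python
# Illustrative sketch only: the multiply-accumulate (MAC) kernel at the
# heart of a CNN convolution. The paper provides no code; all names here
# are hypothetical.

def conv2d(image, kernel):
    """Valid-mode 2D convolution via explicit multiply-accumulate."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    # One MAC: a multiplier feeds a full adder. In a PIM
                    # design this happens inside the memory array itself.
                    acc += image[r + i][c + j] * kernel[i][j]
            out[r][c] = acc
    return out
```

Each output pixel costs kh × kw MACs, which is why moving these operations into the memory array, rather than across the bandwidth-limited memory bus, is the central latency and energy argument of PIM designs.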
