Design, Automation and Test in Europe Conference and Exhibition (DATE)

Fast and Accurate DRAM Simulation: Can we Further Accelerate it?



Abstract

The simulation of Dynamic Random Access Memories (DRAMs) in a system context requires highly accurate models due to the complex timing and power behavior of DRAMs. However, cycle-accurate DRAM models often become the bottleneck for the overall simulation time, so fast yet accurate DRAM simulation models are mandatory. This paper proposes two new performance-optimized DRAM models that further accelerate simulation with only a negligible degradation in accuracy. The first model is an enhanced Transaction Level Model (TLM) that uses a look-up table to accelerate simulation phases with high memory access density in online scenarios. The second model is a neural-network-based simulator for offline trace analysis. We present a mathematical methodology to generate the inputs of the Look-Up Table (LUT) and an optimized artificial training set for the neural network. The enhanced TLM model is up to 5 times faster than a state-of-the-art TLM DRAM simulator, and the neural network speeds up the simulation by a factor of up to 10× when inferring on a GPU. Both solutions incur only a slight loss in accuracy of approximately 5%.
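
The abstract does not detail how the look-up table is built or queried. Purely as an illustration of the general idea, the following C++ sketch shows a hypothetical LUT fast path that returns a precomputed request latency keyed by row-buffer state and a bucketed request inter-arrival time, instead of stepping a cycle-accurate state machine. The `DramLut` structure, its bucketing scheme, and all latency values are assumptions made for this example and are not taken from the paper; in a full TLM model such a table would only be consulted during phases of high memory access density, with the cycle-accurate path kept for the rest.

```cpp
// Minimal sketch (not the paper's implementation): a LUT-based fast path
// that a TLM DRAM model could use for phases with high memory access density.
// The table is indexed by (row hit/miss, inter-arrival-time bucket) and
// returns a precomputed request latency in DRAM clock cycles. All timing
// values and the bucketing scheme are illustrative assumptions.
#include <array>
#include <cstdint>
#include <iostream>

struct DramLut {
    // lut[hit][bucket]: latency in cycles.
    // hit = 0 -> row miss (precharge + activate), hit = 1 -> row-buffer hit.
    std::array<std::array<uint32_t, 4>, 2> lut{{
        {45, 42, 40, 38},   // row miss, buckets from long to short idle time
        {20, 18, 16, 15}    // row hit
    }};

    // Map the idle time since the previous request to a coarse bucket.
    static uint32_t bucket(uint64_t interArrivalCycles) {
        if (interArrivalCycles >= 64) return 0;
        if (interArrivalCycles >= 16) return 1;
        if (interArrivalCycles >= 4)  return 2;
        return 3;                       // back-to-back requests
    }

    uint32_t latency(bool rowHit, uint64_t interArrivalCycles) const {
        return lut[rowHit ? 1 : 0][bucket(interArrivalCycles)];
    }
};

int main() {
    DramLut lut;
    // Example: a row-hit request arriving 2 cycles after the previous one.
    std::cout << "estimated latency: "
              << lut.latency(/*rowHit=*/true, /*interArrivalCycles=*/2)
              << " cycles\n";
    return 0;
}
```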
