Microprocessors and Microsystems

Processing data where it makes sense: Enabling in-memory computation



Abstract

Today's systems are overwhelmingly designed to move data to computation. This design choice runs directly counter to at least three key trends that cause performance, scalability, and energy bottlenecks: (1) data access from memory is already a key bottleneck as applications become more data-intensive while memory bandwidth and energy do not scale well; (2) energy consumption is a key constraint, especially in mobile and server systems; (3) data movement is very expensive in terms of bandwidth, energy, and latency, much more so than computation. These trends are felt especially severely in today's data-intensive server and energy-constrained mobile systems.

At the same time, conventional memory technology faces many scaling challenges in terms of reliability, energy, and performance. As a result, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of higher cost. The emergence of 3D-stacked memory plus logic, the adoption of error-correcting codes inside DRAM chips, and the need to design new solutions to serious reliability and security issues, such as the RowHammer phenomenon, are all evidence of this trend.

In this work, we discuss recent research that aims to practically enable computation close to data. After motivating trends in applications as well as technology, we discuss at least two promising directions for processing-in-memory (PIM): (1) performing massively parallel bulk operations in memory by exploiting the analog operational properties of DRAM, with low-cost changes; (2) exploiting the logic layer in 3D-stacked memory technology to accelerate important data-intensive applications. For both approaches, we describe and tackle the relevant cross-layer research, design, and adoption challenges in devices, architecture, systems, and programming models. Our focus is on the development of in-memory processing designs that can be adopted in real computing platforms at low cost. (C) 2019 Published by Elsevier B.V.
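As an aside on direction (1): one known mechanism for in-DRAM bulk operations (the Ambit approach) simultaneously activates three DRAM rows, so that charge sharing on the bitlines computes the bitwise majority of the three rows; setting the third (control) row to all zeros yields AND, and all ones yields OR. The following is a minimal illustrative software model of that functional behavior, not code from the paper; the function names and word-width parameter are our own.

```python
def majority(a: int, b: int, c: int, width: int = 8) -> int:
    """Bitwise majority of three `width`-bit words.

    Models the result of Ambit-style triple-row activation: each bit of
    the output is 1 iff at least two of the three input bits are 1.
    """
    mask = (1 << width) - 1
    return ((a & b) | (a & c) | (b & c)) & mask

def bulk_and(a: int, b: int, width: int = 8) -> int:
    # Control row initialized to all zeros -> majority reduces to AND.
    return majority(a, b, 0, width)

def bulk_or(a: int, b: int, width: int = 8) -> int:
    # Control row initialized to all ones -> majority reduces to OR.
    return majority(a, b, (1 << width) - 1, width)
```

In real hardware the "rows" are entire DRAM rows (kilobytes wide), which is what makes the operation massively parallel: one activation processes every column of the row at once.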


