Microprocessors and Microsystems

Processing data where it makes sense: Enabling in-memory computation



Abstract

Today's systems are overwhelmingly designed to move data to computation. This design choice goes directly against at least three key trends that cause performance, scalability, and energy bottlenecks in systems: (1) data access from memory is already a key bottleneck as applications become more data-intensive, while memory bandwidth and energy do not scale well; (2) energy consumption is a key constraint, especially in mobile and server systems; (3) data movement is far more expensive than computation in terms of bandwidth, energy, and latency. These trends are felt especially severely in today's data-intensive server and energy-constrained mobile systems.

At the same time, conventional memory technology is facing many scaling challenges in terms of reliability, energy, and performance. As a result, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of higher cost. The emergence of 3D-stacked memory plus logic, the adoption of error-correcting codes inside DRAM chips, and the necessity of designing new solutions to serious reliability and security issues, such as the RowHammer phenomenon, are evidence of this trend.

In this work, we discuss some recent research that aims to practically enable computation close to data. After motivating trends in applications as well as technology, we discuss at least two promising directions for processing-in-memory (PIM): (1) performing massively parallel bulk operations in memory by exploiting the analog operational properties of DRAM, with low-cost changes; (2) exploiting the logic layer in 3D-stacked memory technology to accelerate important data-intensive applications. In both approaches, we describe and tackle the relevant cross-layer research, design, and adoption challenges in devices, architecture, systems, and programming models.
Our focus is on the development of in-memory processing designs that can be adopted in real computing platforms at low cost. (C) 2019 Published by Elsevier B.V.
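The first PIM direction above rests on a known analog property of DRAM: simultaneously activating three rows makes each bitline settle to the majority of the three cells' values, and fixing one "control" row to all-0s or all-1s turns that majority into a bulk bitwise AND or OR. The following is a minimal functional sketch of that majority trick (an illustrative model only, not the paper's hardware design; function names are hypothetical):

```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise majority of three equal-width words, as a bitline would settle."""
    return (a & b) | (b & c) | (a & c)

def bulk_and(row_a, row_b):
    # Control row held at all-0s: MAJ(a, b, 0) == a & b.
    return [majority(a, b, 0x0) for a, b in zip(row_a, row_b)]

def bulk_or(row_a, row_b, width_mask=0xF):
    # Control row held at all-1s (here a 4-bit mask): MAJ(a, b, 1...1) == a | b.
    return [majority(a, b, width_mask) for a, b in zip(row_a, row_b)]

if __name__ == "__main__":
    a = [0b1100, 0b1010]
    b = [0b1010, 0b0110]
    print(bulk_and(a, b))  # [0b1000, 0b0010]
    print(bulk_or(a, b))   # [0b1110, 0b1110]
```

Because every bitline in the activated rows computes its majority in parallel, one such operation processes an entire DRAM row at once, which is the source of the "massively parallel bulk operations" the abstract refers to.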
