
Fogcached: A DRAM/NVMM Hybrid KVS Server for Edge Computing


Abstract

With the development of IoT devices and sensors, edge computing is leading towards new services such as autonomous cars and smart cities. Low-latency data access is an essential requirement for such services, and a large-capacity cache server is needed on the edge side. However, it is not realistic to build a large-capacity cache server using only DRAM, because DRAM is expensive and consumes substantial power. A hybrid main memory system, in which main memory consists of DRAM and non-volatile memory, is a promising way to address this issue: it achieves a large main memory capacity within the power supply capabilities of current servers. In this paper, we propose Fogcached, an extension of the widely used KVS (Key-Value Store) server Memcached that exploits both DRAM and non-volatile main memory (NVMM). We used Intel Optane DCPM as the NVMM for its prototype. Fogcached implements a Dual-LRU (Least Recently Used) mechanism that seamlessly extends the memory management of Memcached to hybrid main memory. Fogcached reuses the segmented LRU of Memcached to manage cached objects in DRAM, adds another segmented LRU for those in DCPM, and bridges the two LRUs with a mechanism that automatically exchanges cached objects between DRAM and DCPM. Cached objects are autonomously moved between the two memory devices according to their access frequencies. Through experiments, we confirmed that Fogcached improved the peak value of the latency distribution by about 40% compared to Memcached.
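The sketch below illustrates the Dual-LRU idea described in the abstract: a hot tier standing in for DRAM, a cold tier standing in for DCPM, demotion of least-recently-used objects from the hot tier to the cold tier, and promotion of frequently accessed objects back to the hot tier. It is not the authors' implementation: plain LRU lists stand in for Memcached's segmented LRUs, the access-count promotion threshold is an assumption for illustration, and all class, method, and parameter names are hypothetical.

```python
from collections import OrderedDict


class DualLRUCache:
    """Toy two-tier cache sketching the Dual-LRU concept (hypothetical names)."""

    def __init__(self, dram_capacity, dcpm_capacity, promote_threshold=3):
        self.dram = OrderedDict()   # hot tier (stands in for DRAM): key -> (value, hits)
        self.dcpm = OrderedDict()   # cold tier (stands in for NVMM/DCPM): key -> (value, hits)
        self.dram_capacity = dram_capacity
        self.dcpm_capacity = dcpm_capacity
        self.promote_threshold = promote_threshold  # assumed promotion policy

    def get(self, key):
        if key in self.dram:
            value, hits = self.dram.pop(key)
            self.dram[key] = (value, hits + 1)       # refresh MRU position in the hot tier
            return value
        if key in self.dcpm:
            value, hits = self.dcpm.pop(key)
            hits += 1
            if hits >= self.promote_threshold:        # frequently accessed: promote to DRAM tier
                self._insert_dram(key, value, hits)
            else:
                self.dcpm[key] = (value, hits)        # stay in the cold tier, refreshed as MRU
            return value
        return None                                   # cache miss

    def put(self, key, value):
        # New or updated objects enter the hot tier; cold objects later spill to DCPM.
        self.dram.pop(key, None)
        self.dcpm.pop(key, None)
        self._insert_dram(key, value, hits=1)

    def _insert_dram(self, key, value, hits):
        self.dram[key] = (value, hits)
        while len(self.dram) > self.dram_capacity:
            old_key, (old_value, _) = self.dram.popitem(last=False)
            self._insert_dcpm(old_key, old_value)     # demote the LRU object to the cold tier

    def _insert_dcpm(self, key, value):
        self.dcpm[key] = (value, 0)                   # hit count restarts after demotion
        while len(self.dcpm) > self.dcpm_capacity:
            self.dcpm.popitem(last=False)             # evict from the cache entirely
```

Under these assumptions, an object that keeps getting read while sitting in the cold tier accumulates hits until it crosses the threshold and is promoted back to the hot tier, while objects that go cold in the hot tier are gradually pushed down and eventually evicted, mirroring the access-frequency-driven movement between DRAM and DCPM described above.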
