Workshop on Algorithm Engineering and Experiments

Cache Replacement with Memory Allocation



Abstract

In the generalized caching problem, items can have varying costs and sizes. We consider a variant of this problem in which the cache management policy must not only specify which items to evict to make room for an incoming item (cache replacement), but must also specify a location in memory where each object can be placed contiguously (memory allocation). The problem is motivated by current implementations of key-value stores in large commercial databases with a high read-to-write ratio, such as those maintained by Facebook and Twitter. We propose a simple algorithm and show that if the algorithm is given some additional memory to account for fragmentation, it is competitive against an offline optimal algorithm that does not specify memory layout. (The optimal algorithm needs only to ensure that the sum of the sizes of the items in the cache does not exceed the total capacity of the cache.) On the benchmark traces in the experiments presented here, our algorithm requires approximately 10–15% additional space to be k-competitive against the optimal offline algorithm. Through trace-driven simulations, we demonstrate that the caching performance of our algorithm for cache replacement with memory allocation is close to that of competitive strategies that are not required to manage memory layout within the cache.
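To make the memory-allocation aspect of the problem concrete, the sketch below is a toy illustration only, not the algorithm proposed in the paper: the class name ContiguousCache, the first-fit hole search, and the plain LRU eviction rule are all illustrative assumptions. It shows a cache that must place every item contiguously in a flat memory region, and may therefore be forced to evict items even when the total free space already exceeds the size of the incoming item, because that space is fragmented.

```python
from collections import OrderedDict

class ContiguousCache:
    """Toy cache that must place each item contiguously in a flat memory
    of `capacity` bytes. Eviction is plain LRU; this is only an
    illustration of the problem setup, not the paper's algorithm."""

    def __init__(self, capacity):
        self.capacity = capacity
        # key -> (offset, size), kept in LRU order (oldest first)
        self.items = OrderedDict()

    def _free_gaps(self):
        """Yield (offset, length) for every free gap, in address order."""
        cursor = 0
        for off, size in sorted(self.items.values()):
            if off > cursor:
                yield cursor, off - cursor
            cursor = off + size
        if cursor < self.capacity:
            yield cursor, self.capacity - cursor

    def _find_hole(self, size):
        """First-fit search for a contiguous gap of at least `size` bytes."""
        for off, length in self._free_gaps():
            if length >= size:
                return off
        return None

    def access(self, key, size):
        """Bring `key` (of `size` bytes) into the cache; return True on a hit."""
        if key in self.items:
            self.items.move_to_end(key)   # refresh LRU position
            return True
        # Evict least-recently-used items until a contiguous hole exists.
        # Fragmentation may force evictions even when enough total free
        # space is available -- the overhead the paper's extra-memory
        # allowance is meant to absorb.
        while self._find_hole(size) is None and self.items:
            self.items.popitem(last=False)
        hole = self._find_hole(size)
        if hole is not None and size <= self.capacity:
            self.items[key] = (hole, size)
        return False
```

In this toy setup, the gap between evictions driven by fragmentation and evictions driven purely by total size is exactly the cost that the paper's additional-memory allowance is intended to cover.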
