Combinatorics, Probability & Computing (CPC)

Optimizing LRU caching for variable document sizes

Abstract

We analyse a class of randomized Least Recently Used (LRU) cache replacement algorithms under the independent reference model with generalized Zipf's law request probabilities. The randomization was recently proposed for Web caching as a mechanism that discriminates between different document sizes. In particular, the cache maintains an ordered list of documents in the following way. When a document of size s is requested and found in the cache, it is moved to the front of the cache with probability p(s); otherwise the cache stays unchanged. Similarly, if the requested document of size s is not found in the cache, the algorithm places it at the front of the cache with probability p(s), or leaves the cache unchanged with the complementary probability 1 - p(s). The successive randomized decisions are independent, and the corresponding success probabilities p(s) are completely determined by the size of the currently requested document. When a replacement is needed, the documents least recently moved to the front of the cache are removed, as many as necessary to accommodate the newly placed document.

In this framework, we provide an explicit asymptotic characterization of the cache fault probability. Using this result, we prove that the asymptotic performance of this class of algorithms is optimized when the randomization probabilities are chosen to be inversely proportional to document sizes. In addition, for this optimized and easy-to-implement policy, we show that its performance is within a constant factor of the optimal static algorithm.
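
To make the replacement rule concrete, here is a minimal Python sketch of the policy described in the abstract. The class name RandomizedLRU, the OrderedDict bookkeeping, the capacity accounting, and the example choice p(s) = min(1, 1/s) are illustrative assumptions for this sketch, not the paper's implementation.

```python
import random
from collections import OrderedDict

class RandomizedLRU:
    """Sketch of the size-discriminating randomized LRU policy: a requested
    document of size s is moved to the front of the cache (or inserted there
    on a miss) only with probability p(s); otherwise the cache is unchanged.
    Eviction removes the documents least recently moved to the front."""

    def __init__(self, capacity, p):
        self.capacity = capacity    # total cache size, in the same units as document sizes
        self.used = 0               # space currently occupied
        self.p = p                  # size-dependent randomization probability p(s)
        self.cache = OrderedDict()  # document id -> size; first entry is the front

    def request(self, doc, size):
        """Process one request; return True on a hit, False on a cache fault."""
        hit = doc in self.cache
        if size <= self.capacity and random.random() < self.p(size):
            if hit:
                self.cache.move_to_end(doc, last=False)   # move to the front
            else:
                # Evict least recently moved-to-front documents until the new one fits.
                while self.used + size > self.capacity:
                    _, evicted_size = self.cache.popitem(last=True)
                    self.used -= evicted_size
                self.cache[doc] = size
                self.cache.move_to_end(doc, last=False)   # place at the front
                self.used += size
        return hit

# The asymptotically optimal choice from the abstract: p(s) proportional to 1/s.
cache = RandomizedLRU(capacity=10_000, p=lambda s: min(1.0, 1.0 / s))
cache.request("doc_a", size=250)   # miss; admitted to the front with probability 1/250
```

With p(s) proportional to 1/s, larger documents are admitted to or refreshed in the cache less often, which is how this class of policies discriminates between document sizes.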
