IEEE Infocom

THE CACHE INFERENCE PROBLEM and its Application to Content and Request Routing

Abstract

In many networked applications, independent caching agents cooperate by servicing each other's miss streams, without revealing the operational details of the caching mechanisms they employ. Inference of such details could be instrumental for many other processes. For example, it could be used for optimized forwarding (or routing) of one's own miss stream (or content) to available proxy caches, or for making cache-aware resource management decisions. In this paper, we introduce the Cache Inference Problem (CIP) as that of inferring the characteristics of a caching agent, given the miss stream of that agent. While CIP is insolvable in its most general form, there are special cases of practical importance in which it is solvable, including when the request stream follows an Independent Reference Model (IRM) with a generalized power-law (GPL) demand distribution. To that end, we design two basic "litmus" tests that are able to detect the LFU and LRU replacement policies, the effective size of the cache and of the object universe, and the skewness of the GPL demand for objects. Using extensive experiments under synthetic as well as real traces, we show that our methods infer such characteristics accurately and quite efficiently, and that they remain robust even when the IRM/GPL assumptions do not hold, and even when the underlying replacement policies are not "pure" LFU or LRU. We demonstrate the value of our inference framework by considering example applications.
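
To make the setting concrete, below is a minimal sketch (not the paper's litmus tests) of the scenario the abstract assumes: an IRM request stream with GPL (Zipf-like) demand over N objects is fed into an LRU cache of capacity C, and only the resulting miss stream is observable to the outside. The universe size N, cache size C, skewness ALPHA, and all names in the sketch are illustrative assumptions, not values or interfaces from the paper.

# Minimal sketch, assuming an IRM request stream with GPL demand and an LRU cache.
# It only generates the observable miss stream; it does not implement the paper's
# inference (litmus) tests. N, C, and ALPHA are assumed illustrative parameters.
import random
from collections import OrderedDict

N = 1000        # size of the object universe (assumed)
C = 100         # effective cache size (assumed)
ALPHA = 0.8     # GPL/Zipf skewness parameter (assumed)

# GPL demand: p_i proportional to 1 / i^ALPHA, i = 1..N; IRM means requests are i.i.d.
weights = [1.0 / (i ** ALPHA) for i in range(1, N + 1)]

def lru_miss_stream(requests, capacity):
    """Run an LRU cache over the request stream and yield only the misses."""
    cache = OrderedDict()
    for obj in requests:
        if obj in cache:
            cache.move_to_end(obj)        # hit: refresh recency
        else:
            yield obj                     # miss: this is what a downstream agent observes
            cache[obj] = True
            if len(cache) > capacity:
                cache.popitem(last=False) # evict the least recently used object

random.seed(0)
requests = random.choices(range(N), weights=weights, k=50_000)
misses = list(lru_miss_stream(requests, C))
print(f"miss rate: {len(misses) / len(requests):.3f}")

In the paper's setting, an inference method would take only the miss stream (the misses list above) as input and attempt to recover the cache size, the object-universe size, the GPL skewness, and whether the replacement policy is LRU or LFU.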