In the generalized caching problem, items can have varying costs and sizes. We consider a variant of this problem in which the cache management policy must not only specify which items to evict to make room for an incoming item (cache replacement), but must also specify a location in memory where each object can be placed contiguously (memory allocation). The problem is motivated by current implementations of key-value stores in large commercial databases with high read-to-write ratios, such as those maintained by Facebook and Twitter. We propose a simple algorithm and show that if the algorithm is given some additional memory to account for fragmentation, it is competitive against an offline optimal algorithm that does not specify memory layout. (The optimal algorithm need only ensure that the sum of the sizes of the items in the cache does not exceed the total capacity of the cache.) On the benchmark traces in the experiments presented here, our algorithm requires approximately 10–15% additional space to be k-competitive against the optimal offline algorithm. Through trace-driven simulations, we demonstrate that the caching performance of our algorithm for cache replacement with memory allocation is close to that of competitive strategies that are not required to manage memory layout within the cache.