IEEE International Conference on Cloud Computing

CAPI-Flash Accelerated Persistent Read Cache for Apache Cassandra

Abstract

In real-world NoSQL deployments, users have to trade off CPU, memory, I/O bandwidth, and storage space to achieve the required performance and efficiency goals. Data compression is vital for improving storage space efficiency, but reading compressed data increases response time. Compressed data stores therefore rely heavily on memory caching to speed up read operations. However, because large DRAM capacities are expensive, NoSQL databases have become costly to deploy and hard to scale. In our work, we present a persistent caching mechanism for Apache Cassandra built on a high-throughput, low-latency FPGA-based NVMe Flash accelerator (CAPI-Flash), replacing Cassandra's in-memory cache. Because flash is dramatically less expensive per byte than DRAM, our caching mechanism gives Apache Cassandra access to a large caching layer at lower cost. The experimental results show that for read-intensive workloads, our caching layer improves throughput by up to 85% and reduces CPU usage by 25% compared to default Cassandra.
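
The abstract implies a cache-aside read path: look up the requested partition in the flash-backed cache first, and only fall back to reading and decompressing SSTable data on a miss. The following minimal Java sketch illustrates that pattern under stated assumptions; FlashCache, StubFlashCache, and CacheAsideReadPath are hypothetical names introduced here for illustration, and the stub cache stands in for the CAPI-Flash key-value layer, whose actual API is not described in the abstract.

// Illustrative cache-aside read path; a sketch, not the paper's implementation.
// The in-memory stub replaces the CAPI-Flash-backed store so the example runs
// without accelerator hardware.
import java.nio.charset.StandardCharsets;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal stand-in for a persistent, flash-backed key-value cache (hypothetical). */
interface FlashCache {
    Optional<byte[]> get(byte[] key);
    void put(byte[] key, byte[] value);
}

/** In-memory stub so the sketch runs without CAPI-Flash hardware. */
final class StubFlashCache implements FlashCache {
    private final ConcurrentHashMap<String, byte[]> map = new ConcurrentHashMap<>();
    public Optional<byte[]> get(byte[] key) {
        return Optional.ofNullable(map.get(new String(key, StandardCharsets.UTF_8)));
    }
    public void put(byte[] key, byte[] value) {
        map.put(new String(key, StandardCharsets.UTF_8), value);
    }
}

public class CacheAsideReadPath {
    private final FlashCache cache = new StubFlashCache();

    /** Serve a read from the flash cache if possible; otherwise fall back to
     *  the (expensive) compressed SSTable read and populate the cache. */
    byte[] read(byte[] partitionKey) {
        Optional<byte[]> hit = cache.get(partitionKey);
        if (hit.isPresent()) {
            return hit.get();                  // hit: skips decompression on the read path
        }
        byte[] row = readFromCompressedSSTable(partitionKey);
        cache.put(partitionKey, row);          // miss: populate for future reads
        return row;
    }

    /** Placeholder for Cassandra's normal read: locate, read, and decompress a row. */
    private byte[] readFromCompressedSSTable(byte[] partitionKey) {
        return ("row-for-" + new String(partitionKey, StandardCharsets.UTF_8))
                .getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        CacheAsideReadPath reader = new CacheAsideReadPath();
        byte[] key = "user:42".getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(reader.read(key), StandardCharsets.UTF_8)); // first read misses
        System.out.println(new String(reader.read(key), StandardCharsets.UTF_8)); // second read hits
    }
}

Because flash is much cheaper per byte than DRAM, such a cache can be provisioned far larger than Cassandra's default in-memory row cache, which is the cost argument the abstract makes.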