
Adaptive pipeline for deduplication

Abstract

Deduplication has become one of the hottest topics in the field of data storage. A number of methods for reducing the disk I/O caused by deduplication have been proposed, and several methods have also been studied to accelerate the computational sub-tasks of deduplication. However, the order of the computational sub-tasks can affect overall deduplication throughput significantly, because these sub-tasks exhibit quite different workloads and degrees of concurrency under different orders and data sets. This paper proposes an adaptive pipelining model for the computational sub-tasks in deduplication that takes both the data type and the hardware platform into account. Using the compression ratio and the duplicate ratio of the data stream, together with the compression speed and the fingerprinting speed on the different processing units, as parameters, it determines the optimal order of the pipeline stages (the computational sub-tasks) and assigns each stage to the processing unit that processes it fastest. In other words, "adaptive" refers to being both data adaptive and hardware adaptive. Experimental results show that the adaptive pipeline improves deduplication throughput by up to 50% compared with a plain fixed pipeline, which implies that it is well suited to the simultaneous deduplication of various data types on modern heterogeneous multi-core systems.
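To make the idea concrete, the following is a minimal sketch of how such an order-and-placement decision could be modeled, assuming a simplified two-stage pipeline (fingerprinting and compression), a compression ratio defined as compressed size divided by original size, and deterministic fingerprinting of whichever bytes reach that stage. The stage names, speed numbers, and functions (STAGE_SPEEDS, best_pipeline) are illustrative assumptions, not the paper's actual algorithm or measurements.

```python
from itertools import permutations

# Hypothetical per-unit stage speeds in MB/s, measured offline on each
# available processing unit; the numbers below are purely illustrative.
STAGE_SPEEDS = {
    "fingerprint": {"cpu": 400.0, "gpu": 1200.0},
    "compress":    {"cpu": 250.0, "gpu": 900.0},
}

def stage_workload(order, dup_ratio, comp_ratio):
    """Fraction of the input stream each stage must process for a given order.
    If fingerprinting runs first, duplicate chunks are filtered out before
    compression; if compression runs first, fingerprinting only sees the
    compressed bytes."""
    workload = {}
    remaining = 1.0  # fraction of the original stream still flowing
    for stage in order:
        workload[stage] = remaining
        if stage == "fingerprint":
            remaining *= (1.0 - dup_ratio)   # only unique chunks continue
        elif stage == "compress":
            remaining *= comp_ratio          # only compressed bytes continue
    return workload

def best_pipeline(dup_ratio, comp_ratio, speeds=STAGE_SPEEDS):
    """Choose the stage order and unit assignment that maximise throughput.
    Pipeline throughput is bounded by the slowest stage, so we minimise the
    maximum (workload / speed) over all stages."""
    best = None
    for order in permutations(speeds.keys()):
        workload = stage_workload(order, dup_ratio, comp_ratio)
        # mirror the abstract: each stage goes to the unit that runs it fastest
        assignment = {s: max(speeds[s], key=speeds[s].get) for s in order}
        bottleneck = max(workload[s] / speeds[s][assignment[s]] for s in order)
        if best is None or bottleneck < best[0]:
            best = (bottleneck, order, assignment)
    return best[1], best[2]

if __name__ == "__main__":
    # A highly duplicated backup stream favours fingerprinting before compression.
    print(best_pipeline(dup_ratio=0.8, comp_ratio=0.5))
```

With a high duplicate ratio the model picks fingerprint-then-compress, since most chunks never reach the compressor; with a low duplicate ratio but a strong compression ratio, compress-then-fingerprint shrinks the fingerprinting workload instead. This is only a toy model of the trade-off the abstract describes.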
