2017 4th International Conference on Signal Processing, Computing and Control

Genetic optimized data deduplication for distributed big data storage systems



Abstract

Content-Defined Chunking (CDC) has been used in recent years to detect maximum redundancy in data deduplication systems. In this work, we focus on optimizing the deduplication system by tuning the key factors of CDC: how chunk cut-points are declared, and how fingerprints are looked up efficiently using bucket-based index partitioning. For efficient chunking, we propose a Genetic Evolution (GE) based approach, TTTD-P, which optimizes the Two Thresholds Two Divisors (TTTD) CDC algorithm: it significantly reduces the number of computing operations by replacing TTTD's multiple divisors with a single, dynamically optimized divisor D and an optimal threshold value. To reduce chunk-size variance, the original TTTD algorithm introduces an additional backup divisor D' that has a higher probability of finding cut-points; however, the extra divisor lowers chunking throughput, so TTTD aggravates the performance bottleneck of Rabin-based CDC. Asymmetric Extremum (AE) addresses this by using the local extreme value in a variable-sized asymmetric window, which significantly improves chunking throughput with comparable deduplication efficiency and overcomes the boundary-shift problem of Rabin, MAXP, and TTTD. FastCDC (2016) is about 10 times faster than unimodal Rabin CDC and about 3 times faster than Gear and AE CDC, while achieving nearly the same deduplication ratio (DR). We therefore propose GE-optimized TTTD-P chunking to maximize chunking throughput with an increased DR, combined with a bucket indexing approach that reduces the time to compare hash values and declare redundant chunks by about 16 times over the unimodal Rabin CDC baseline, 5 times over AE CDC, and 1.6 times over FastCDC. Our comparative experimental analysis shows that TTTD-P using the fast BUZ rolling hash with bucket indexing on the Hadoop Distributed File System (HDFS) provides the highest redundancy detection, with higher throughput, a higher deduplication ratio, less computation time, and very low hash-comparison time, making it the best of the compared data deduplication approaches for distributed big data storage systems.
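
To make the chunking step concrete, the following is a minimal Python sketch of TTTD-style content-defined chunking driven by a BUZ-like rolling hash: a main divisor D declares cut-points, and a backup divisor D' supplies a fallback when no main cut-point appears before the maximum threshold. The window size, thresholds, and divisor values are illustrative defaults, not the GE-optimized parameters reported in the paper.

import random

WINDOW = 48                                    # rolling-hash window size (illustrative)
random.seed(0)
BYTE_TABLE = [random.getrandbits(32) for _ in range(256)]   # random value per byte

def rol32(x, n):
    """Rotate a 32-bit value left by n bits."""
    n %= 32
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def tttd_chunks(data, t_min=2048, t_max=16384, d_main=4096, d_backup=2048):
    """TTTD-style content-defined chunking (sketch).

    Cut where the rolling hash matches the main divisor (d_main); remember
    positions matching the easier backup divisor (d_backup) and fall back to
    the last of them, or to t_max, if no main cut-point appears in time.
    """
    chunks, start, n = [], 0, len(data)
    while start < n:
        h, backup = 0, -1
        end = min(start + t_max, n)
        cut = end                              # default: forced cut at t_max / EOF
        for i in range(start, end):
            h = rol32(h, 1) ^ BYTE_TABLE[data[i]]
            if i - start >= WINDOW:            # drop the byte leaving the window
                h ^= rol32(BYTE_TABLE[data[i - WINDOW]], WINDOW)
            if i - start + 1 < t_min:          # enforce the minimum chunk size
                continue
            if h % d_backup == d_backup - 1:
                backup = i + 1                 # candidate backup cut-point
            if h % d_main == d_main - 1:
                cut = i + 1                    # main cut-point found
                break
        else:
            if end == start + t_max and backup != -1:
                cut = backup                   # no main cut before t_max: use D'
        chunks.append(bytes(data[start:cut]))
        start = cut
    return chunks

# Usage: chunk a random byte string and report the resulting chunk sizes.
payload = bytes(random.getrandbits(8) for _ in range(200_000))
sizes = [len(c) for c in tttd_chunks(payload)]
print(len(sizes), "chunks, sizes from", min(sizes), "to", max(sizes))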
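The abstract attributes much of the lookup speed-up to bucket-based index partitioning of chunk fingerprints. The sketch below shows the general idea under the assumption that fingerprints are partitioned by the leading bits of a SHA-1 digest; the class name BucketFingerprintIndex, the prefix width, and the hash choice are hypothetical and not taken from the paper.

import hashlib

class BucketFingerprintIndex:
    """Bucket-partitioned chunk fingerprint index (sketch).

    Fingerprints are spread over 2**prefix_bits buckets keyed by the leading
    bits of the digest, so a duplicate check consults one small bucket rather
    than comparing against the whole index.
    """

    def __init__(self, prefix_bits=8):
        assert 1 <= prefix_bits <= 16          # prefix taken from the first two bytes
        self.prefix_bits = prefix_bits
        self.buckets = [dict() for _ in range(1 << prefix_bits)]

    def _bucket_of(self, fingerprint):
        prefix = int.from_bytes(fingerprint[:2], "big") >> (16 - self.prefix_bits)
        return self.buckets[prefix]

    def is_duplicate(self, chunk):
        """Return True if the chunk was seen before; otherwise record it."""
        fp = hashlib.sha1(chunk).digest()      # chunk fingerprint
        bucket = self._bucket_of(fp)
        if fp in bucket:
            return True
        bucket[fp] = len(chunk)                # fingerprint -> stored chunk size
        return False

# Usage: keep only the unique chunks from a chunk stream
# (for example, the output of tttd_chunks from the sketch above).
index = BucketFingerprintIndex()
chunks = [b"hello world" * 100, b"foo bar" * 80, b"hello world" * 100]
unique = [c for c in chunks if not index.is_duplicate(c)]
print("kept", len(unique), "of", len(chunks), "chunks")   # kept 2 of 3 chunks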
