International Conference on Soft Computing Systems

Experimental Study on Chunking Algorithms of Data Deduplication System on Large Scale Data



Abstract

Data deduplication, also known as data redundancy elimination, is a technique for saving storage space. Data deduplication systems are highly successful in backup storage environments, where large amounts of redundant data may exist. These redundancies can be eliminated by computing and comparing fingerprints. The comparison of fingerprints may be done at the file level, or files may be split into chunks and the comparison done at the chunk level. File-level deduplication yields poorer results than chunk-level deduplication, since it computes a hash value over the entire file and therefore eliminates only duplicate files. Because chunking plays a very important role in a data redundancy elimination system, this paper focuses on an experimental study of various chunking algorithms.
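To illustrate the chunk-level pipeline the abstract describes, here is a minimal sketch (not the paper's implementation) of the simplest chunking scheme, fixed-size chunking, with SHA-1 fingerprints used to detect and skip duplicate chunks. The function names and chunk size are assumptions chosen for the example.

```python
import hashlib

def chunk_fixed(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks (the simplest chunking scheme)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def deduplicate(files):
    """Store each unique chunk once, keyed by its SHA-1 fingerprint."""
    store = {}    # fingerprint -> chunk data (each unique chunk stored once)
    recipes = []  # per-file list of fingerprints, enough to reconstruct the file
    for data in files:
        recipe = []
        for chunk in chunk_fixed(data):
            fp = hashlib.sha1(chunk).hexdigest()
            store.setdefault(fp, chunk)  # a duplicate chunk is not stored again
            recipe.append(fp)
        recipes.append(recipe)
    return store, recipes
```

Two backup files that share a 4 KiB prefix would produce three stored chunks instead of four; file-level deduplication, by contrast, would hash each whole file and store both in full. Content-defined chunking schemes (e.g. Rabin-fingerprint-based ones studied in such comparisons) replace `chunk_fixed` with a boundary test driven by a rolling hash, so that insertions do not shift all subsequent chunk boundaries.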
