Published in: International conference on brain-inspired cognitive systems

Hadoop Massive Small File Merging Technology Based on Visiting Hot-Spot and Associated File Optimization



Abstract

The Hadoop Distributed File System (HDFS) is designed to reliably store and manage large-scale files. All files in HDFS are managed by a single server, the NameNode, which keeps metadata in its main memory for every file stored in HDFS. HDFS therefore suffers a performance penalty as the number of small files grows: storing and managing a mass of small files places a heavy burden on the NameNode, and the number of files HDFS can hold is constrained by the size of the NameNode's main memory. To improve the efficiency of storing and accessing small files on HDFS, we propose the Small Hadoop Distributed File System (SHDFS), which builds on the original HDFS. Compared to the original HDFS, SHDFS adds two novel modules: a merging module and a caching module. The merging module introduces a correlated-files model that identifies correlated files through user-based collaborative filtering and then merges them into a single large file, reducing the total number of files. The caching module uses a log-linear model to identify hot-spot data that users access frequently, and a dedicated memory subsystem caches this data to speed up access. Experimental results indicate that SHDFS reduces the metadata footprint in the NameNode's main memory and improves the efficiency of storing and accessing large numbers of small files.
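The merging idea in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it stands in for the correlated-files model with a simple co-access heuristic (files often accessed by the same users are grouped, then packed into one blob with an offset index). All names, the access-log format, and the co-access threshold are assumptions for illustration.

```python
from collections import defaultdict
from itertools import combinations

def correlated_groups(access_log, threshold=2):
    """Group files frequently accessed by the same users.

    access_log: list of (user, file) records (hypothetical format).
    File pairs whose co-access count reaches `threshold` are merged
    into one group -- a crude stand-in for user-based collaborative
    filtering over access histories."""
    user_files = defaultdict(set)
    for user, f in access_log:
        user_files[user].add(f)

    # Count, for each file pair, how many users accessed both files.
    co_access = defaultdict(int)
    for files in user_files.values():
        for a, b in combinations(sorted(files), 2):
            co_access[(a, b)] += 1

    # Union-find: merge strongly co-accessed pairs into groups.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), n in co_access.items():
        if n >= threshold:
            union(a, b)

    groups = defaultdict(list)
    for f in {f for _, f in access_log}:
        groups[find(f)].append(f)
    return [sorted(g) for g in groups.values()]

def merge_group(group, read_file):
    """Concatenate a group of small files into one blob and return an
    index mapping each file name to its (offset, length) in the blob,
    so individual files remain addressable after merging."""
    blob = bytearray()
    index = {}
    for name in group:
        data = read_file(name)
        index[name] = (len(blob), len(data))
        blob.extend(data)
    return bytes(blob), index
```

In the real system the blob would become a single large HDFS file, so the NameNode tracks one entry per group instead of one per small file.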
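The caching module can likewise be sketched. This is a hypothetical simplification of the paper's approach: the log-linear hot-spot model is reduced to a log-count threshold, and the special memory subsystem to a small LRU cache. Class and parameter names are illustrative only.

```python
import math
from collections import Counter, OrderedDict

class HotSpotCache:
    """Tiny in-memory cache for frequently accessed files.

    A file counts as a hot spot once log(1 + access_count) crosses
    `hot_log_count` -- a stand-in for the paper's log-linear model.
    Cached entries are evicted in LRU order when capacity is exceeded."""

    def __init__(self, capacity=64, hot_log_count=math.log(4)):
        self.capacity = capacity
        self.hot_log_count = hot_log_count
        self.counts = Counter()          # per-file access counts
        self.cache = OrderedDict()       # file -> bytes, LRU order

    def get(self, name, read_file):
        self.counts[name] += 1
        if name in self.cache:
            self.cache.move_to_end(name)  # refresh LRU position
            return self.cache[name]
        data = read_file(name)            # cache miss: hit storage
        if math.log(1 + self.counts[name]) >= self.hot_log_count:
            self.cache[name] = data       # file became hot: keep it
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recent
        return data
```

With the default threshold a file is cached from its third access onward, so repeated reads of hot-spot data stop hitting the underlying file system.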
