Published in: International Conference on Big Data, Small Data, Linked Data and Open Data

Flexible Management of Data Nodes for Hadoop Distributed File System



Abstract

Hadoop Distributed File System (HDFS) is a file system that stores big data in a distributed manner. Although an HDFS cluster provides great scalability, it requires numerous dedicated data nodes, which makes it difficult for a small business enterprise to construct a big data system. This paper presents a novel mechanism for flexible management of data nodes in an HDFS cluster. A block replication scheme is also presented to ensure data availability. With the proposed scheme, the storage capacity of an HDFS cluster can be increased dynamically by enlisting existing hardware systems as data nodes.
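The abstract does not detail the paper's mechanism, but the core idea it describes, keeping every block replicated across distinct nodes so that data stays available when a non-dedicated ("flexible") node joins or leaves, can be sketched as a small simulation. All names here (`Cluster`, `write_block`, `REPLICATION`, and so on) are illustrative assumptions, not APIs from the paper or from Hadoop itself:

```python
import random

REPLICATION = 3  # HDFS's default replication factor


class Cluster:
    """Toy model of an HDFS-like cluster with dynamically managed data nodes."""

    def __init__(self):
        self.nodes = {}  # node name -> set of block ids stored on that node

    def add_node(self, name):
        # Enlisting an existing machine as a data node grows capacity.
        self.nodes[name] = set()

    def write_block(self, block_id):
        # Place replicas on distinct nodes, up to the replication factor.
        targets = random.sample(list(self.nodes), min(REPLICATION, len(self.nodes)))
        for n in targets:
            self.nodes[n].add(block_id)

    def remove_node(self, name):
        # A flexible node leaves: re-replicate its blocks onto other nodes
        # so each block regains its target replica count.
        lost_blocks = self.nodes.pop(name)
        for b in lost_blocks:
            holders = [n for n, blks in self.nodes.items() if b in blks]
            candidates = [n for n in self.nodes if b not in self.nodes[n]]
            need = REPLICATION - len(holders)
            for n in random.sample(candidates, min(max(need, 0), len(candidates))):
                self.nodes[n].add(b)

    def replicas(self, block_id):
        return sum(block_id in blks for blks in self.nodes.values())
```

In this sketch, removing a node that holds a block triggers re-replication, so `replicas()` returns the full replication factor again as long as enough nodes remain, which mirrors the availability guarantee the abstract attributes to the block replication scheme.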


