
Analytical Review on the Hadoop Distributed File System


Abstract

The Hadoop Distributed File System (HDFS) is used for storing, processing, and analyzing very large amounts of unstructured data. It stores data reliably and provides fault tolerance together with fast, scalable access to information. It is used with MapReduce, a programming model; HDFS and MapReduce are the core components of Hadoop, a framework of tools for large-scale computation and the processing of large data sets. Because data and information are growing exponentially in the current era, technologies such as Hadoop and the Cassandra File System have become the preferred choice among IT professionals and business communities. HDFS is growing rapidly and proving itself a cutting-edge technology for handling huge amounts of structured and unstructured data. This paper gives a step-by-step introduction to data management using a file system, data management using an RDBMS, the need for the Hadoop Distributed File System, and how it works.
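The MapReduce programming model mentioned in the abstract can be sketched with the classic word-count example. This is a minimal, single-process illustration of the map, shuffle, and reduce phases in plain Python; the function names and structure are illustrative only and are not Hadoop's actual Java API:

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit an intermediate (word, 1) pair for every word in an input split
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key, as the framework
    # does between the map and reduce phases
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: aggregate the grouped values for each key (here, sum the counts)
    return {key: sum(values) for key, values in grouped.items()}

documents = ["big data big ideas", "data drives ideas"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'ideas': 2, 'drives': 1}
```

In a real Hadoop cluster the documents would be blocks stored across HDFS DataNodes, the map tasks would run on the nodes holding those blocks, and the framework would handle the shuffle and fault tolerance automatically.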
