2017 IEEE International Symposium on Workload Characterization

Memory requirements of hadoop, spark, and MPI based big data applications on commodity server class architectures



Abstract

Emerging big data frameworks require computational resources and memory subsystems that can naturally scale to manage massive amounts of diverse data. Given the large size and heterogeneity of the data, it is currently unclear whether big data frameworks such as Hadoop, Spark, and MPI will require high-performance, large-capacity memory to cope with this change, and exactly what role main memory subsystems will play, particularly in terms of energy efficiency. The primary purpose of this study is to answer these questions through empirical analysis of the different memory configurations available on commodity hardware, and to assess the impact of these configurations on the performance and power of these well-established frameworks. Our results reveal that while Hadoop places no major demand on high-end DRAM, Spark and MPI iterative tasks (e.g. machine learning) benefit from high-end DRAM, in particular high frequency and a large number of channels. Among the configurable parameters, our results indicate that increasing the number of DRAM channels reduces DRAM power and improves energy efficiency across all three frameworks.
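The energy-efficiency comparison the abstract describes can be illustrated with a small sketch. This is not the paper's methodology; the function and all numbers below are hypothetical, showing only how work-per-joule would favor a configuration that both shortens runtime and lowers DRAM power:

```python
# Hedged illustration: energy efficiency as work completed per joule.
# All figures are invented for demonstration, not taken from the paper.

def energy_efficiency(runtime_s: float, avg_power_w: float,
                      work_units: float = 1.0) -> float:
    """Work units completed per joule (energy = runtime * average power)."""
    energy_j = runtime_s * avg_power_w
    return work_units / energy_j

# Two hypothetical DRAM configurations running the same iterative job:
single_channel = energy_efficiency(runtime_s=120.0, avg_power_w=18.0)  # 1 channel
quad_channel   = energy_efficiency(runtime_s=95.0,  avg_power_w=15.0)  # 4 channels

# In this example, more channels cut runtime and DRAM power together,
# so jobs-per-joule improves, mirroring the trend the abstract reports.
print(f"1-channel: {single_channel:.2e} jobs/J")
print(f"4-channel: {quad_channel:.2e} jobs/J")
```

The metric matters because a configuration can raise raw performance while still losing on efficiency if its power draw grows faster than its speedup; the paper's finding is that extra channels avoid that trade-off.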

