4th Annual Petascale Data Storage Workshop (PDSW 2009)

Mixing Hadoop and HPC Workloads on Parallel Filesystems



Abstract

Distributed filesystems tailored for MapReduce (such as HDFS for Hadoop MapReduce) and parallel high-performance computing filesystems are designed for considerably different workloads. The purpose of our work is to examine the performance of each filesystem when both sorts of workload run on it concurrently.

We examine two workloads on two filesystems. For the HPC workload, we use the IOR checkpointing benchmark and the Parallel Virtual File System, Version 2 (PVFS); for Hadoop, we use an HTTP attack classifier and the CloudStore filesystem. We analyze the performance of each filesystem when it concurrently runs its "native" workload as well as the non-native workload.
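For orientation, a checkpoint-style IOR run of the kind the abstract describes might look like the sketch below. The process count, block size, transfer size, and output path are illustrative assumptions, not parameters reported by the paper.

```shell
# Hypothetical IOR checkpoint run against a PVFS mount (all parameters are assumptions):
#   -a POSIX   use the POSIX I/O API against the mounted volume
#   -w -r      write phase (the checkpoint) followed by a read-back phase
#   -b 256m    per-process block size; -t 4m sets the per-call transfer size
#   -e         fsync after writes so data actually reaches the filesystem
#   -o ...     path of the shared test file on the PVFS mount
mpirun -np 64 ior -a POSIX -w -r -b 256m -t 4m -e -o /mnt/pvfs2/ior.ckpt
```

Runs like this generate the large, sequential, synchronized I/O bursts typical of HPC checkpointing, which is the access pattern the paper contrasts with Hadoop's streaming reads.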
