24th ACM International Conference on Supercomputing (ICS 2010)

InterferenceRemoval: Removing Interference of Disk Access for MPI Programs through Data Replication

Abstract

As the number of I/O-intensive MPI programs becomes increasingly large, many efforts have been made to improve I/O performance, on both the software and architecture sides. On the software side, researchers can optimize processes' access patterns, either individually (e.g., by using large and sequential requests in each process) or collectively (e.g., by using collective I/O). On the architecture side, files are striped over multiple I/O nodes for high aggregate I/O throughput. However, a key weakness, the access interference on each I/O node, remains unaddressed in these efforts. When requests from multiple processes are served simultaneously by multiple I/O nodes, each I/O node has to serve requests from different processes concurrently. Usually the I/O node stores its data on hard disks, and different processes access different regions of a data set. When there is a burst of requests from multiple processes, the requests from different processes to a disk compete for its single disk head, and disk efficiency can be significantly reduced by the resulting frequent disk head seeks.

In this paper, we propose a scheme, InterferenceRemoval, to eliminate I/O interference by taking advantage of optimized access patterns and the potentially high throughput provided by multiple I/O nodes. It identifies segments of files that could be involved in interfering accesses and replicates them to their respectively designated I/O nodes. When interference is detected at an I/O node, some I/O requests can be redirected to the replicas on other I/O nodes, so that each I/O node serves requests from only one or a limited number of processes. InterferenceRemoval has been implemented in the MPI library, for high portability, on top of the Lustre parallel file system. Our experiments with representative benchmarks, such as NPB BTIO and mpi-tile-io, show that it can significantly improve the I/O performance of MPI programs. For example, the I/O throughput of mpi-tile-io can be increased by 105% compared to not using collective I/O, and by 23% compared to using collective I/O.
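The abstract contrasts independent and collective I/O on a file striped over multiple I/O nodes. The following minimal C/MPI-IO sketch (not from the paper; the file name "shared.dat" and the region size are placeholder assumptions) illustrates the kind of access pattern under discussion: each rank owns a disjoint region of one shared file, so on a striped file the per-rank requests from many processes arrive at the same I/O nodes. The commented-out independent write can be swapped in for the collective write to compare the two modes.

/*
 * Minimal sketch of a shared-file access pattern similar to mpi-tile-io:
 * each MPI rank writes its own contiguous region of one striped file.
 * With independent I/O, requests from different ranks interleave at the
 * I/O nodes and compete for each disk head; with collective I/O, the MPI
 * library coordinates and merges the requests before issuing them.
 */
#include <mpi.h>
#include <stdlib.h>

#define REGION_BYTES (4 * 1024 * 1024)   /* hypothetical per-rank region size */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(REGION_BYTES);
    for (int i = 0; i < REGION_BYTES; i++)
        buf[i] = (char)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank owns a disjoint region of the shared file. */
    MPI_Offset offset = (MPI_Offset)rank * REGION_BYTES;

    /* Independent I/O: every rank issues its own request.
       MPI_File_write_at(fh, offset, buf, REGION_BYTES, MPI_BYTE,
                         MPI_STATUS_IGNORE);                        */

    /* Collective I/O: ranks synchronize and the library reorders
       and merges the requests before they reach the file system.  */
    MPI_File_write_at_all(fh, offset, buf, REGION_BYTES, MPI_BYTE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

Even with collective I/O, the abstract argues, a burst of requests from many processes still forces each I/O node's disk to seek between the regions owned by different ranks; InterferenceRemoval addresses this by replicating the contended file segments and redirecting requests so each I/O node serves few processes.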
