ACM International Conference on Supercomputing

InterferenceRemoval: Removing Interference of Disk Access for MPI Programs through Data Replication

Abstract

As the number of I/O-intensive MPI programs grows, many efforts have been made to improve I/O performance on both the software and the architecture sides. On the software side, researchers can optimize processes' access patterns, either individually (e.g., by using large and sequential requests in each process) or collectively (e.g., by using collective I/O). On the architecture side, files are striped over multiple I/O nodes for high aggregate I/O throughput. However, a key weakness, the access interference on each I/O node, remains unaddressed in these efforts. When requests from multiple processes are served simultaneously by multiple I/O nodes, one I/O node has to serve requests from different processes concurrently. Usually an I/O node stores its data on hard disks, and different processes access different regions of a data set. When there is a burst of requests from multiple processes, requests from different processes to a disk compete with each other for its single disk head, and disk efficiency can be significantly reduced by frequent disk head seeks. In this paper, we propose a scheme, InterferenceRemoval, to eliminate I/O interference by taking advantage of optimized access patterns and the potentially high throughput provided by multiple I/O nodes. It identifies segments of files that could be involved in interfering accesses and replicates them to their respective designated I/O nodes. When interference is detected at an I/O node, some I/O requests can be redirected to the replicas on other I/O nodes, so that each I/O node serves requests from only one or a limited number of processes. InterferenceRemoval has been implemented in the MPI library, for high portability, on top of the Lustre parallel file system. Our experiments with representative benchmarks, such as NPB BTIO and mpi-tile-io, show that it can significantly improve the I/O performance of MPI programs. For example, the I/O throughput of mpi-tile-io is increased by 105% compared to the baseline without collective I/O, and by 23% compared to the baseline with collective I/O.
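For context on the collective-I/O baseline the abstract refers to, the short program below is a minimal, self-contained MPI-IO example (not taken from the paper): each rank writes a disjoint 1 MiB block of one shared file with the standard collective call MPI_File_write_at_all, which lets the MPI library merge per-rank requests into large sequential accesses at the I/O nodes.

```c
/* Minimal MPI-IO collective write: each rank writes a disjoint,
 * contiguous 1 MiB block of one shared file. Compile with mpicc. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const MPI_Offset block = 1 << 20;            /* 1 MiB per rank */
    char *buf = malloc(block);
    for (MPI_Offset i = 0; i < block; i++)
        buf[i] = (char)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Collective I/O: all ranks call this together, so the MPI
     * library can reorganize the requests into large sequential
     * accesses before they reach the I/O nodes. */
    MPI_File_write_at_all(fh, (MPI_Offset)rank * block, buf,
                          (int)block, MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```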
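The abstract describes, but does not include code for, the redirection decision at the heart of InterferenceRemoval. The sketch below is purely illustrative, not the authors' implementation: all names (primary_node, replica_node, interference_detected) are invented stubs. It shows only the decision the abstract states: when the primary I/O node selected by file striping is detected to be serving interfering requests, a process's request is redirected to the I/O node that holds its replica.

```c
/* Conceptual sketch (not the paper's code) of replica redirection.
 * In the real scheme, interference-prone file segments are replicated
 * in advance to designated I/O nodes, and interference is detected at
 * the I/O nodes; both are hard-coded stubs here. */
#include <stdio.h>

#define NUM_IO_NODES 4
#define STRIPE_SIZE  (1L << 20)   /* assumed 1 MiB stripe unit */

typedef struct { long offset; long length; } io_request_t;

/* Primary placement: file striped round-robin over I/O nodes. */
static int primary_node(const io_request_t *r) {
    return (int)((r->offset / STRIPE_SIZE) % NUM_IO_NODES);
}

/* Hypothetical replica map: each process's interfering segments
 * were replicated to one designated I/O node. */
static int replica_node(int rank) {
    return rank % NUM_IO_NODES;
}

/* Stub detector: pretend the primary node is serving requests
 * from multiple processes at once. */
static int interference_detected(int io_node) {
    (void)io_node;
    return 1;
}

static void issue(int rank, const io_request_t *r) {
    int target = primary_node(r);
    if (interference_detected(target))
        target = replica_node(rank);  /* redirect to the replica */
    printf("rank %d: offset %ld -> I/O node %d\n",
           rank, r->offset, target);
}

int main(void) {
    io_request_t r = { .offset = 5L << 20, .length = 1L << 20 };
    issue(2, &r);
    return 0;
}
```

With this redirection, each I/O node ends up serving requests from only one or a few processes, which is how the scheme avoids frequent disk head seeks on each node.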
