IEEE International Parallel & Distributed Processing Symposium

High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms

Abstract

Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high-throughput computation of large datasets of high-resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance-aware scheduling techniques along with several optimizations, including architecture-aware process placement, data-locality-conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and to minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements over GPU-only versions (up to 1.6×) and that executing the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than the coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4K×4K-pixel image tiles (about 1.8 TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system.
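To make the two-level pipeline model concrete, the C++ sketch below shows one way coarse-grain stages could expand into fine-grain operations that a performance-aware policy assigns to CPU or GPU work queues based on estimated speedup. The type and function names (FineGrainOp, Stage, schedule_stage) and the timing values are illustrative assumptions, not the paper's actual runtime API or measurements.

```cpp
// Minimal sketch, assuming a hypothetical runtime interface (FineGrainOp,
// Stage, schedule_stage are illustrative, not the paper's actual API).
// It shows the two-level structure from the abstract: coarse-grain stages
// expand into fine-grain operations, and a performance-aware policy assigns
// each operation to a CPU or GPU queue based on its estimated speedup.
#include <iostream>
#include <queue>
#include <string>
#include <vector>

// A fine-grain operation, e.g. a segmentation or feature-computation step.
struct FineGrainOp {
    std::string name;
    double cpu_time;  // estimated execution time on a CPU core (seconds)
    double gpu_time;  // estimated execution time on a GPU (seconds)
};

// A coarse-grain pipeline stage that expands into fine-grain operations.
struct Stage {
    std::string name;
    std::vector<FineGrainOp> ops;
};

// Performance-aware assignment: operations whose estimated GPU speedup
// exceeds the threshold go to the GPU queue; the rest stay on the CPU queue.
void schedule_stage(const Stage& stage,
                    std::queue<FineGrainOp>& cpu_queue,
                    std::queue<FineGrainOp>& gpu_queue,
                    double speedup_threshold) {
    for (const auto& op : stage.ops) {
        const double speedup = op.cpu_time / op.gpu_time;
        if (speedup >= speedup_threshold) {
            gpu_queue.push(op);
        } else {
            cpu_queue.push(op);
        }
    }
}

int main() {
    // Hypothetical stages and timings for a tile-processing pipeline.
    Stage segmentation{"segmentation",
                       {{"nucleus detection", 1.2, 0.10},
                        {"morphological open", 0.8, 0.30},
                        {"watershed", 2.0, 0.15}}};
    Stage features{"feature computation",
                   {{"color histogram", 0.5, 0.40},
                    {"gradient statistics", 0.9, 0.08}}};

    std::queue<FineGrainOp> cpu_queue, gpu_queue;
    for (const Stage& stage : {segmentation, features}) {
        schedule_stage(stage, cpu_queue, gpu_queue, /*speedup_threshold=*/3.0);
    }

    std::cout << "GPU queue: " << gpu_queue.size() << " ops, "
              << "CPU queue: " << cpu_queue.size() << " ops\n";
    return 0;
}
```

Scheduling at the granularity of individual operations, rather than whole monolithic stages, is what gives the runtime room to place each operation on the device where it runs best; the abstract reports that this cooperative CPU-GPU execution yields up to a 1.6× improvement over GPU-only runs.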
