
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service


Abstract

Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches, which can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, a flexible framework providing distributed, scalable, fault-tolerant storage and parallel computational modules, and HBase, a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, a necessary step driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to traditional scan-, subject-, and project-level analysis procedures, and is compatible with standard command-line/scriptable image processing software.
Experimental results for an illustrative sample of imaging data reveal that our new HBase policy yields a three-fold improvement in the time to convert classic DICOM to NIfTI file formats compared with the default HBase region-split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach, even for relatively small file sets. Moreover, file access latency is lower than that of network-attached storage.
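The row-key contribution hinges on the fact that HBase sorts rows lexicographically by key bytes, so a key that encodes the project/subject/session/scan/slice hierarchy keeps related rows contiguous and less likely to be scattered by region splits. The exact key format is not given in this abstract; the sketch below uses hypothetical zero-padded, `/`-delimited fields purely to illustrate the idea.

```python
# Illustrative sketch only: a hierarchical HBase-style row key for imaging
# data. The zero-padded, '/'-delimited layout is an assumption for this
# example, not the authors' published scheme.

def make_row_key(project: str, subject: int, session: int,
                 scan: int, slice_idx: int) -> bytes:
    """Build a row key whose lexicographic (byte) order matches the
    imaging hierarchy, so related slices sort and store contiguously."""
    return "/".join([
        project,
        f"subj{subject:05d}",   # fixed width so 2 sorts before 10
        f"sess{session:03d}",
        f"scan{scan:03d}",
        f"slice{slice_idx:04d}",
    ]).encode("utf-8")

keys = [
    make_row_key("ProjA", 12, 1, 2, 30),
    make_row_key("ProjA", 12, 1, 2, 3),
    make_row_key("ProjA", 12, 1, 1, 99),
]
# Byte-order sorting (as HBase applies to row keys) groups all slices of a
# scan together, so a contiguous key range covers one scan, subject, or
# project, and hierarchy-aware splits can keep such ranges collocated.
for k in sorted(keys):
    print(k.decode())
```

With this ordering, a scan-, subject-, or project-level job reduces to a single contiguous key-range scan, which is what allows an allocation policy to place the whole range on one node and process it locally.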
