
Dynamic volume rendering of functional medical data on dissimilar hardware platforms


Abstract

In the last 30 years, medical imaging has become one of the most widely used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are all volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest-quality renderer of these methods.

Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has since improved to allow a set of scans capable of capturing anatomical movements like a beating heart. Capturing anatomical data over time is referred to as functional imaging.

Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRI can be used to capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans that capture activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time.

Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a single medical case or dataset, so any advances tend to be problem-specific rather than a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding range of computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones and tablets.

This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the growing number of mobile computational devices used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field.

Developing the same 4D volume rendering capabilities across dissimilar platforms poses many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries are generally more efficient at application run time, but they require a different coding implementation for each platform. The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates.

4D volume raycasting presents unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting has never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research.

The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. Visualizing 4D fMRI data required building in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Reading the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform.
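The dissertation's own loader is not reproduced here, but a minimal sketch of the idea, assuming an uncompressed single-file .nii volume in native byte order and using only the C++ Standard Library, might read the fixed 348-byte NIfTI-1 header and the voxel block as follows (the struct and function names are illustrative, not the dissertation's API):

```cpp
// Minimal, illustrative NIfTI-1 (.nii) reader. Field offsets follow the
// published NIfTI-1 header layout; compressed .nii.gz files and byte
// swapping are deliberately left out of this sketch.
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

struct NiftiVolume {
    int16_t dim[8];       // dim[0] = number of dimensions; dim[4] = time steps for 4D data
    int16_t datatype;     // NIfTI datatype code (e.g. 4 = signed 16-bit integer)
    int16_t bitpix;       // bits per voxel
    float   pixdim[8];    // voxel spacing; pixdim[4] is the time step
    std::vector<char> voxels;  // raw voxel data for all time steps
};

NiftiVolume loadNifti(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    if (!in) throw std::runtime_error("cannot open " + path);

    int32_t sizeof_hdr = 0;                // must be 348 for NIfTI-1
    in.read(reinterpret_cast<char*>(&sizeof_hdr), 4);
    if (sizeof_hdr != 348) throw std::runtime_error("not a NIfTI-1 file: " + path);

    NiftiVolume v{};
    in.seekg(40);  in.read(reinterpret_cast<char*>(v.dim), sizeof v.dim);
    in.seekg(70);  in.read(reinterpret_cast<char*>(&v.datatype), 2);
    in.seekg(72);  in.read(reinterpret_cast<char*>(&v.bitpix), 2);
    in.seekg(76);  in.read(reinterpret_cast<char*>(v.pixdim), sizeof v.pixdim);

    float vox_offset = 0.0f;               // byte offset where the voxel data begins
    in.seekg(108); in.read(reinterpret_cast<char*>(&vox_offset), 4);

    std::size_t voxelCount = 1;
    for (int i = 1; i <= v.dim[0]; ++i)    // multiply out x, y, z (and t for 4D scans)
        voxelCount *= static_cast<std::size_t>(v.dim[i]);

    v.voxels.resize(voxelCount * static_cast<std::size_t>(v.bitpix / 8));
    in.seekg(static_cast<std::streamoff>(vox_offset));
    in.read(v.voxels.data(), static_cast<std::streamsize>(v.voxels.size()));
    if (!in) throw std::runtime_error("truncated NIfTI file: " + path);
    return v;
}
```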
Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. The ability to correctly align and scale the volumes relative to each other was therefore necessary, as was a compositing method to combine data from both volumes into a single cohesive representation.
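In the prototypes this combination happens on the GPU; as an illustration of the general idea rather than the dissertation's exact method, the following CPU-side sketch samples both volumes at each step along a ray and composites them front to back. The blend rule and transfer functions are illustrative, and both volumes are assumed to share one origin and physical extent so that each volume's own dimensions handle the relative scaling:

```cpp
// Illustrative CPU version of the per-ray work a GPU raycaster typically
// performs in a shader: sample anatomy and activity at the same world-space
// point, apply simple transfer functions, and composite front to back.
#include <array>
#include <cstddef>

struct Vec3 { float x, y, z; };
struct Rgba { float r, g, b, a; };

// Nearest-neighbour lookup of a volume at a world-space point.
float sampleVolume(const float* voxels, const std::array<int, 3>& dim,
                   const std::array<float, 3>& extent, Vec3 p) {
    int ix = static_cast<int>(p.x / extent[0] * (dim[0] - 1));
    int iy = static_cast<int>(p.y / extent[1] * (dim[1] - 1));
    int iz = static_cast<int>(p.z / extent[2] * (dim[2] - 1));
    if (ix < 0 || iy < 0 || iz < 0 || ix >= dim[0] || iy >= dim[1] || iz >= dim[2])
        return 0.0f;  // outside this volume
    return voxels[(static_cast<std::size_t>(iz) * dim[1] + iy) * dim[0] + ix];
}

// March one ray front to back, combining the anatomical volume and the
// current functional time step into a single composited colour.
Rgba castRay(Vec3 origin, Vec3 dir, float stepSize, int numSteps,
             const float* anatomy, const std::array<int, 3>& aDim,
             const std::array<float, 3>& aExtent,
             const float* functional, const std::array<int, 3>& fDim,
             const std::array<float, 3>& fExtent) {
    Rgba out{0.0f, 0.0f, 0.0f, 0.0f};
    for (int i = 0; i < numSteps && out.a < 0.95f; ++i) {  // early ray termination
        Vec3 p{origin.x + dir.x * stepSize * i,
               origin.y + dir.y * stepSize * i,
               origin.z + dir.z * stepSize * i};
        float anat = sampleVolume(anatomy, aDim, aExtent, p);     // structure
        float act  = sampleVolume(functional, fDim, fExtent, p);  // activity
        // Simple transfer functions: grey anatomy, activity tinted red.
        Rgba s{anat + act, anat, anat, anat * 0.05f + act * 0.2f};
        // Standard front-to-back "over" compositing.
        out.r += (1.0f - out.a) * s.a * s.r;
        out.g += (1.0f - out.a) * s.a * s.g;
        out.b += (1.0f - out.a) * s.a * s.b;
        out.a += (1.0f - out.a) * s.a;
    }
    return out;
}
```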
Three prototype applications, one each for desktop, mobile, and virtual reality, were built to test the feasibility of 4D volume raycasting. Although the backend implementations necessarily differ between the three platforms, the raycasting functionality and features are identical, so the same fMRI dataset produces the same 3D visualization regardless of the platform. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to "scrub" through them.

The prototype applications' data load times and frame rates were tested to determine whether they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster composed of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7" iPad Pro running iOS 9.3.4, with a 64-bit Apple A9X dual-core processor and 2 GB of built-in memory.

Two fMRI brain activity datasets with different voxel resolutions were used as test datasets. Each dataset was tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently, a marked improvement over previous 3D mobile volume raycasting, which achieved under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but they did not consistently meet 10 fps when rendering the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
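As a small illustration of how such a threshold can be checked, 10 fps corresponds to a budget of at most 100 ms per frame. The renderFrame callback below is a hypothetical stand-in for one full raycasting pass, not an API from the prototypes:

```cpp
// Minimal sketch of per-frame timing against the 10 fps real-time threshold.
#include <chrono>
#include <cstdio>
#include <functional>

void timeFrames(int frames, const std::function<void()>& renderFrame) {
    using clock = std::chrono::steady_clock;
    for (int i = 0; i < frames; ++i) {
        auto t0 = clock::now();
        renderFrame();  // one full raycasting pass over the current time step
        double ms  = std::chrono::duration<double, std::milli>(clock::now() - t0).count();
        double fps = 1000.0 / ms;
        std::printf("frame %d: %.1f ms (%.1f fps) %s\n", i, ms, fps,
                    fps >= 10.0 ? "meets the real-time target" : "below 10 fps");
    }
}
```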

Bibliographic details

  • Author: Holub, Joseph Scott.
  • Author's affiliation: Iowa State University.
  • Degree-granting institution: Iowa State University.
  • Subject: Computer engineering.
  • Degree: Ph.D.
  • Year: 2016
  • Pages: 168 p.
  • Total pages: 168
  • Original format: PDF
  • Language: English (eng)
