IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control

Deep Learning to Obtain Simultaneous Image and Segmentation Outputs From a Single Input of Raw Ultrasound Channel Data

Abstract

Single plane wave transmissions are promising for automated imaging tasks requiring high ultrasound frame rates over an extended field of view. However, a single plane wave insonification typically produces suboptimal image quality. To address this limitation, we are exploring the use of deep neural networks (DNNs) as an alternative to delay-and-sum (DAS) beamforming. The objectives of this work are to obtain information directly from raw channel data and to simultaneously generate both a segmentation map for automated ultrasound tasks and a corresponding ultrasound B-mode image for interpretable supervision of the automation. We focus on visualizing and segmenting anechoic targets surrounded by tissue and ignoring or deemphasizing less important surrounding structures. DNNs trained with Field II simulations were tested with simulated, experimental phantom, and in vivo data sets that were not included during training. With unfocused input channel data (i.e., prior to the application of receive time delays), simulated, experimental phantom, and in vivo test data sets achieved mean ± standard deviation Dice similarity coefficients of 0.92 ± 0.13, 0.92 ± 0.03, and 0.77 ± 0.07, respectively, and generalized contrast-to-noise ratios (gCNRs) of 0.95 ± 0.08, 0.93 ± 0.08, and 0.75 ± 0.14, respectively. With subaperture beamformed channel data and a modification to the input layer of the DNN architecture to accept these data, the fidelity of image reconstruction increased (e.g., mean gCNR of multiple acquisitions of two in vivo breast cysts ranged 0.89–0.96), but DNN display frame rates were reduced from 395 to 287 Hz. Overall, the DNNs successfully translated feature representations learned from simulated data to phantom and in vivo data, which is promising for this novel approach to simultaneous ultrasound image formation and segmentation.
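The abstract evaluates the DNN outputs with two standard metrics: the Dice similarity coefficient for the segmentation map and the generalized contrast-to-noise ratio (gCNR) for lesion detectability in the reconstructed B-mode image. As a point of reference, the sketch below shows how these two metrics are commonly computed from a predicted binary mask and an image; it assumes NumPy arrays, and the function names, 256-bin histogram estimate of gCNR, and region definitions are illustrative choices, not the authors' implementation.

import numpy as np

def dice_coefficient(pred_mask, true_mask):
    # Dice similarity coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def gcnr(image, region_in, region_out, bins=256):
    # Generalized contrast-to-noise ratio between two regions of an image:
    # gCNR = 1 - sum_k min(p_in[k], p_out[k]), where p_in and p_out are
    # histogram estimates of the pixel-value distributions inside the target
    # (e.g., an anechoic cyst) and in the surrounding tissue.
    vals_in = image[np.asarray(region_in, dtype=bool)]
    vals_out = image[np.asarray(region_out, dtype=bool)]
    lo = min(vals_in.min(), vals_out.min())
    hi = max(vals_in.max(), vals_out.max())
    p_in, _ = np.histogram(vals_in, bins=bins, range=(lo, hi))
    p_out, _ = np.histogram(vals_out, bins=bins, range=(lo, hi))
    p_in = p_in / p_in.sum()
    p_out = p_out / p_out.sum()
    return 1.0 - np.minimum(p_in, p_out).sum()

In this usage, region_in would be the pixels inside the known target boundary and region_out a patch of surrounding tissue; both metrics are bounded above by 1, with higher Dice indicating better segmentation overlap and higher gCNR indicating better separability of target and background pixel distributions.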
