...
Journal: Procedia Computer Science

Fetal Ultrasound Image Classification Using a Bag-of-words Model Trained on Sonographers’ Eye Movements



Abstract

The acquisition of fetal biometric measurements via 2-D B-mode ultrasound (US) scans is crucial for fetal monitoring. However, acquiring standardised head, abdominal and femoral image planes is challenging due to variable image quality. There remains a significant discrepancy between how automated computer vision algorithms and human sonographers perform this task; this paper contributes to bridging this gap by building knowledge of US image perception into a pipeline for classifying images obtained during 2-D fetal US scans. We record the eye movements of 10 participants, each performing four 2-D US scans of a phantom fetal model at varying orientations. We analyse their eye movements to establish which high-level constraints and visual cues are used to localise the standardised abdominal plane. We then build a vocabulary of visual words trained on SURF descriptors extracted around eye fixations, and use the resulting bag-of-words model to classify head, abdominal and femoral image frames acquired during 10 clinical US scans and 10 further phantom US scans. On phantom data, we achieve classification accuracies of 89%, 87% and 85% for the head, abdominal and femoral images respectively. On clinical data, we achieve classification accuracies of 76%, 68% and 64% for the head, abdominal and femoral images respectively. This constitutes the first insight into image perception during real-time US scanning, and a proof of concept for training bag-of-words models for US image analysis on human eye movements.
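The pipeline the abstract describes — cluster local descriptors into a visual vocabulary, represent each frame as a histogram of visual words, then train a classifier over those histograms — can be sketched as below. This is a minimal illustration, not the authors' implementation: real SURF descriptors (64-D, extracted around recorded eye fixations via OpenCV's contrib module) are replaced here by synthetic stand-ins, and the `fake_descriptors` helper, class labels, and vocabulary size `k = 20` are all assumptions for the toy example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def fake_descriptors(cls, n=40, dim=64):
    # Stand-in for 64-D SURF descriptors sampled around eye fixations;
    # each plane class is drawn from a different region of descriptor space.
    return rng.normal(loc=cls * 3.0, scale=1.0, size=(n, dim))

classes = [0, 1, 2]  # head, abdominal, femoral planes
train_desc = [fake_descriptors(c) for c in classes for _ in range(5)]
train_lbls = [c for c in classes for _ in range(5)]

# 1) Vocabulary: k-means over the pooled training descriptors;
#    each cluster centre is one "visual word".
k = 20
kmeans = KMeans(n_clusters=k, n_init=5, random_state=0).fit(np.vstack(train_desc))

def bow_histogram(desc):
    # 2) Quantise each descriptor to its nearest visual word and
    #    L1-normalise the resulting word-count histogram.
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

# 3) Train a linear classifier on the bag-of-words histograms.
X = np.array([bow_histogram(d) for d in train_desc])
clf = LinearSVC().fit(X, train_lbls)

# Classify a held-out frame from its fixation descriptors.
print(clf.predict([bow_histogram(fake_descriptors(1))])[0])
```

Training the vocabulary only on fixation-centred patches, rather than on a dense grid, is what injects the sonographers' gaze behaviour into an otherwise standard bag-of-words classifier.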

