IEEE International Symposium on Computer-Based Medical Systems

A Deep Clustering Method For Analyzing Uterine Cervix Images Across Imaging Devices



Abstract

Visual inspection of the cervix with acetic acid (VIA), though error prone, has long been used to screen women for cervical cancer and to guide management. The automated visual evaluation (AVE) technique, in which deep learning is used to predict precancer from a digital image of the acetowhitened cervix, has demonstrated promise as a low-cost method to improve on human performance. However, there are several challenges in moving AVE beyond proof-of-concept and deploying it as a practical adjunct tool in visual screening. One of them is making AVE robust across images captured with different devices. We propose a new deep learning based clustering approach to investigate whether images taken by three different devices (a common smartphone, a custom smartphone-based handheld device for cervical imaging, and a clinical colposcope equipped with SLR digital camera-based imaging capability) can be well distinguished from each other with respect to the visual appearance/content within their cervix regions. We argue that disparity in the visual appearance of a cervix across devices could be a significant confounding factor in training and generalizing AVE performance. Our method consists of four components: cervix region detection, feature extraction, feature encoding, and clustering. Multiple experiments are conducted to demonstrate the effectiveness of each component and to compare alternative methods for each component. Our proposed method achieves high clustering accuracy (97%) and significantly outperforms several representative deep clustering methods on our dataset. The high clustering performance indicates that the images taken by these three devices differ in visual appearance. Our results and analysis establish a need for developing a method that minimizes such variance among images acquired from different devices. They also point to the need for a large number of training images from different sources to achieve robust, device-independent AVE performance worldwide.
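The four-component pipeline named in the abstract (cervix region detection, feature extraction, feature encoding, clustering) can be sketched as a minimal toy in Python. This is illustrative only, not the authors' implementation: a fixed central crop stands in for the learned region detector, per-channel color histograms stand in for deep CNN features, L2 normalization stands in for the paper's feature encoding, and a plain k-means with farthest-point initialization stands in for the proposed deep clustering.

```python
import numpy as np

def detect_cervix_region(image):
    # Toy ROI "detector": crop the central half of the image.
    # (The paper uses a learned cervix detector; this is only a placeholder.)
    h, w = image.shape[:2]
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def extract_features(roi, bins=8):
    # Per-channel intensity histograms as a stand-in for deep features.
    hists = [np.histogram(roi[..., c], bins=bins, range=(0, 256))[0]
             for c in range(roi.shape[-1])]
    return np.concatenate(hists).astype(float)

def encode_features(feats):
    # Simple encoding step: L2-normalize each feature vector.
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.maximum(norms, 1e-12)

def kmeans(X, k, iters=50):
    # Minimal Lloyd's k-means with deterministic farthest-point init.
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def cluster_by_device(images, k=3):
    # Full pipeline: detect ROI -> extract -> encode -> cluster.
    rois = [detect_cervix_region(im) for im in images]
    feats = np.stack([extract_features(r) for r in rois])
    return kmeans(encode_features(feats), k)
```

On synthetic "device" groups with distinct intensity distributions, this sketch cleanly separates the groups, mirroring the paper's finding that device-specific appearance is easily clustered.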
