Conference: Computer Vision, Graphics and Image Processing

Tracking of Retinal Microsurgery Tools Using Late Fusion of Responses from Convolutional Neural Network over Pyramidally Decomposed Frames

Abstract

Computer vision and robotic assistance are increasingly being used to improve the quality of surgical interventions. Tool tracking becomes critical in interventions such as endoscopy, laparoscopy, and retinal microsurgery (RM), where, unlike open surgery, the surgeon does not have direct visual and physical access to the surgical site. RM is performed with miniaturized tools and requires careful observation by the surgeon through a surgical microscope. Tracking of surgical tools primarily provides robotic assistance during surgery and also serves as a means to assess surgical quality, which is extremely useful during surgical training. In this paper we propose deep-learning-based visual tracking of surgical tools using late fusion of responses from a convolutional neural network (CNN), which comprises three steps: (i) training a CNN to localize the tool tip in a frame, (ii) coarsely estimating the tool-tip region using the trained CNN, and (iii) performing a finer search around the estimated region to accurately localize the tool tip. Scale-invariant tracking is ensured by incorporating multi-scale late fusion, in which CNN responses are obtained at each level of a Gaussian scale decomposition pyramid. The performance of the proposed method is experimentally validated on the publicly available Retinal Microscopy Instrument Tracking (RMIT) dataset (https://sites.google.com/site/sznitr/code-and-datasets). Our method tracks tools with a maximum accuracy of 99.13%, which substantiates its efficacy in comparison to existing approaches.

