Conference on Medical Imaging: Image-Guided Procedures, Robotic Interventions, and Modeling

Intraoperative Guidance of Orthopaedic Instruments Using 3D Correspondence of 2D Object Instance Segmentations



Abstract

Purpose. Surgical placement of pelvic instrumentation is challenged by complex anatomy and narrow bone corridors, and despite heavy reliance on intraoperative fluoroscopy, trauma surgery lacks a reliable solution for 3D surgical navigation that is compatible with steep workflow requirements. We report a method that uses routinely acquired fluoroscopic images in standard workflow to automatically detect and localize orthopaedic instruments for 3D guidance.

Methods. The proposed method detects, establishes correspondence of, and localizes orthopaedic devices from a pair of radiographs. Instrument detection uses Mask R-CNN for segmentation and keypoint detection, trained on 4000 cadaveric pelvic radiographs with simulated guidewires. Keypoints detected in the individual images are corresponded using prior calibration of the imaging system to backproject, identify, and rank-order ray intersections. Estimation of 3D instrument tip and direction was evaluated on a cadaveric specimen and on patient images from an IRB-approved clinical study.

Results. The detection network generalized successfully to cadaver and clinical images, achieving 87% recall and 98% precision. Mean geometric accuracy in estimating instrument tip and direction was (1.9 ± 1.6) mm and (1.8 ± 1.3)°, respectively. Simulation studies demonstrated 1.1 mm median error in 3D tip estimation and 2.3° in 3D direction estimation. Preliminary tests in cadaver and clinical images show the basic feasibility of the overall approach.

Conclusions. Experimental studies demonstrate the feasibility and highlight the potential of deep learning for 3D-2D registration of orthopaedic instruments as applied in fixation of pelvic fractures. The approach is compatible with routine orthopaedic workflow, does not require additional equipment (such as surgical trackers), uses common imaging equipment (mobile C-arm fluoroscopy), and does not require vendor-specific device models.
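The correspondence step described in the abstract backprojects each detected 2D keypoint to a 3D ray from the calibrated X-ray source and localizes the instrument near the rays' intersection. A minimal sketch of that triangulation, taking the midpoint of closest approach between two backprojected rays (the geometry, coordinates, and function names below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def backproject_ray(source, detector_point):
    """Return (origin, unit direction) of the ray from the X-ray source
    through a keypoint's 3D location on the detector plane."""
    d = detector_point - source
    return source, d / np.linalg.norm(d)

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach between two (possibly skew) rays.

    Solves the 2x2 normal equations that minimize
    |(o1 + t1*d1) - (o2 + t2*d2)| over t1, t2.
    """
    r = o1 - o2
    a = d1 @ d1          # = 1 for unit directions
    b = d1 @ d2
    c = d2 @ d2          # = 1 for unit directions
    e = d1 @ r
    f = d2 @ r
    denom = a * c - b * b           # zero only for parallel rays
    t1 = (b * f - c * e) / denom
    t2 = (a * f - b * e) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    gap = np.linalg.norm(p1 - p2)   # residual: how closely the rays pass
    return (p1 + p2) / 2.0, gap

# Two C-arm views of a keypoint at (10, 20, 30) mm (made-up geometry):
target = np.array([10.0, 20.0, 30.0])
s1 = np.array([0.0, 0.0, -500.0])    # source position, view 1
s2 = np.array([400.0, 0.0, -300.0])  # source position, view 2
o1, d1 = backproject_ray(s1, target)
o2, d2 = backproject_ray(s2, target)
p, gap = triangulate(o1, d1, o2, d2)
```

In the paper's setting, candidate keypoint pairings across the two views would be rank-ordered by this residual (`gap`): rays from a true correspondence nearly intersect, while mismatched pairings pass far apart.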


