Conference on Medical Imaging: Image-Guided Procedures, Robotic Interventions, and Modeling
Towards Augmented Reality-based Suturing in Monocular Laparoscopic Training

Abstract

Minimally Invasive Surgery (MIS) techniques have gained rapid popularity among surgeons since they offer significant clinical benefits, including reduced recovery time and diminished post-operative adverse effects. However, conventional endoscopic systems output monocular video, which compromises depth perception, spatial orientation, and field of view. Suturing is one of the most complex tasks performed under these circumstances. A key component of this task is the interplay between the needle holder and the surgical needle. Reliable real-time 3D localization of the needle and instruments could be used to augment the scene with additional parameters that describe their quantitative geometric relation, e.g. the relation between the estimated needle plane, its rotation center, and the instrument. This could contribute towards standardization and training of basic skills and operative techniques, enhance overall surgical performance, and reduce the risk of complications. This paper proposes an Augmented Reality environment with quantitative and qualitative visual representations to enhance laparoscopic training outcomes on a silicone pad. It is enabled by a multi-task supervised deep neural network that performs multi-class segmentation and depth map prediction. The scarcity of labels was overcome by creating a virtual environment that resembles the surgical training scenario and generates dense depth maps and segmentation maps. The proposed convolutional neural network was tested on real surgical training scenarios and shown to be robust to occlusion of the needle. The network achieves a Dice score of 0.67 for surgical needle segmentation, 0.81 for needle holder instrument segmentation, and a mean absolute error of 6.5 mm for depth estimation.
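The evaluation metrics quoted above (Dice score for segmentation, mean absolute error for depth) follow their standard definitions. A minimal sketch of how such metrics are typically computed over a predicted mask and depth map, assuming NumPy arrays as inputs (the function names are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2*|A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

def depth_mae(pred_depth: np.ndarray, gt_depth: np.ndarray) -> float:
    """Mean absolute error between predicted and ground-truth depth maps."""
    return float(np.abs(pred_depth - gt_depth).mean())
```

For multi-class segmentation (needle vs. needle holder vs. background), the Dice score would be computed per class on the corresponding binary masks, which matches the per-class figures (0.67 and 0.81) reported in the abstract.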