Future Frame Prediction for Robot-Assisted Surgery

Abstract

Predicting future frames for robotic surgical video is an interesting, important, yet extremely challenging problem, given that operative tasks may have complex dynamics. Existing approaches to future prediction in natural videos are based on either deterministic or stochastic models, including deep recurrent neural networks, optical flow, and latent space modeling. However, the potential for predicting the meaningful movements of dual-arm robots in surgical scenarios has so far remained untapped; this is typically more challenging than forecasting the independent motions of single-arm robots in natural scenarios. In this paper, we propose a ternary prior guided variational autoencoder (TPG-VAE) model for future frame prediction in robotic surgical video sequences. Besides the content distribution, our model learns a motion distribution, which is novel for handling the small movements of surgical tools. Furthermore, we add invariant prior information from the gesture class into the generation process to constrain the latent space of our model. To the best of our knowledge, this is the first time that the future frames of dual-arm robots have been predicted while accounting for their unique characteristics relative to general robotic videos. Experiments on the suturing task of the public JIGSAWS dataset demonstrate that our model produces more stable and realistic future frame predictions.
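The abstract describes a VAE whose latent space is factorized into a content distribution and a motion distribution, with an invariant gesture-class prior injected into the generation process. The sketch below illustrates that general idea in PyTorch; every name (ToyTPGVAE, GaussianHead), layer size, the frame-difference motion encoding, and the loss weighting are assumptions chosen for illustration, not the authors' TPG-VAE implementation.

```python
# Illustrative sketch only: a VAE-style next-frame predictor with separate
# content and motion latents plus a gesture-class prior, loosely following
# the ideas in the abstract. Architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianHead(nn.Module):
    """Maps features to a diagonal-Gaussian latent via reparameterization."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

class ToyTPGVAE(nn.Module):
    def __init__(self, frame_dim=4096, z_dim=64, num_gestures=10):
        super().__init__()
        self.content_enc = nn.Sequential(nn.Linear(frame_dim, 256), nn.ReLU())
        self.motion_enc = nn.Sequential(nn.Linear(2 * frame_dim, 256), nn.ReLU())
        self.content_head = GaussianHead(256, z_dim)
        self.motion_head = GaussianHead(256, z_dim)
        # Invariant prior from the gesture class, injected into generation.
        self.gesture_emb = nn.Embedding(num_gestures, z_dim)
        self.decoder = nn.Sequential(
            nn.Linear(3 * z_dim, 256), nn.ReLU(), nn.Linear(256, frame_dim))

    def forward(self, prev_frame, cur_frame, gesture_id):
        # Content latent from the current frame; motion latent from the
        # frame difference (a stand-in for small-tool-motion modeling).
        z_c, mu_c, lv_c = self.content_head(self.content_enc(cur_frame))
        motion_in = torch.cat([cur_frame, cur_frame - prev_frame], dim=-1)
        z_m, mu_m, lv_m = self.motion_head(self.motion_enc(motion_in))
        g = self.gesture_emb(gesture_id)  # deterministic class prior
        pred_next = self.decoder(torch.cat([z_c, z_m, g], dim=-1))
        kl = self._kl(mu_c, lv_c) + self._kl(mu_m, lv_m)
        return pred_next, kl

    @staticmethod
    def _kl(mu, logvar):
        # KL divergence to a standard normal prior, averaged over the batch.
        return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)).mean()

# Usage demo on random tensors (frames flattened to vectors for brevity).
model = ToyTPGVAE()
prev, cur = torch.randn(8, 4096), torch.randn(8, 4096)
target_next = torch.randn(8, 4096)
pred, kl = model(prev, cur, torch.randint(0, 10, (8,)))
loss = F.mse_loss(pred, target_next) + 1e-3 * kl  # KL weight is arbitrary here
loss.backward()
```

Factorizing the latent space this way lets the model keep scene appearance (content) stable across time while the motion latent and the class-conditioned prior absorb the tool dynamics; the real TPG-VAE operates on image frames with convolutional encoders rather than the flattened vectors used in this toy version.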