Surgical Gesture Classification from Video Data

Abstract

Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on kinematic and dynamic cues, such as time to completion, speed, forces, torque, or robot trajectories. In this paper we show that, in a typical surgical training setup, video data can be equally discriminative. To that end, we propose and evaluate three approaches to surgical gesture classification from video. In the first approach, we model each video clip of each surgical gesture as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new clips. In the second, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words and classify new clips with a bag-of-features (BoF) approach. In the third, we use multiple kernel learning to combine the LDS and BoF approaches. Our experiments show that methods based on video data perform on par with state-of-the-art approaches based on kinematic data.
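
A minimal statement of the first approach, assuming the conventional LDS formulation (the notation below is standard in the LDS literature, and the choice of metric is an illustrative example rather than a quote from the paper):

    x_{t+1} = A x_t + w_t, \quad w_t \sim \mathcal{N}(0, Q)
    y_t     = C x_t + v_t, \quad v_t \sim \mathcal{N}(0, R)

Here y_t is the observed feature vector of frame t and x_t a low-dimensional hidden state; each clip is summarized by its identified pair (A, C), and new clips are classified with a metric between such pairs, e.g. a subspace-angle (Martin-type) distance.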
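
And a minimal, self-contained sketch of the second and third approaches in Python, assuming scikit-learn. The specifics are illustrative assumptions, not the paper's implementation: random arrays stand in for spatio-temporal interest-point descriptors, the dictionary size (32 words) and RBF histogram kernel are arbitrary choices, the LDS kernel is a random placeholder for one derived from a metric between identified systems, and the fixed weight mu stands in for weights learned by multiple kernel learning.

    # Sketch only: random stand-ins for descriptors and the LDS kernel.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # One (n_i x 64) descriptor array per clip; 40 clips, 3 gesture classes.
    clips = [rng.normal(size=(int(rng.integers(50, 100)), 64)) for _ in range(40)]
    labels = rng.integers(0, 3, size=40)

    # 1. Learn a dictionary of "spatio-temporal words" by k-means.
    kmeans = KMeans(n_clusters=32, n_init=10, random_state=0)
    kmeans.fit(np.vstack(clips))

    # 2. Represent each clip as a normalized histogram of word counts.
    def bof_histogram(descriptors):
        words = kmeans.predict(descriptors)
        hist = np.bincount(words, minlength=32).astype(float)
        return hist / hist.sum()

    H = np.array([bof_histogram(c) for c in clips])

    # 3. BoF kernel: RBF on histograms (a chi-square kernel is another
    #    common choice for BoF representations).
    sq_dists = ((H[:, None, :] - H[None, :, :]) ** 2).sum(axis=-1)
    K_bof = np.exp(-sq_dists)

    # 4. LDS kernel: random symmetric placeholder for exp(-d(LDS_i, LDS_j)).
    D = rng.random((40, 40))
    D = (D + D.T) / 2.0
    np.fill_diagonal(D, 0.0)
    K_lds = np.exp(-D)

    # 5. Fixed-weight combination standing in for learned MKL weights.
    mu = 0.5
    K = mu * K_lds + (1.0 - mu) * K_bof

    clf = SVC(kernel="precomputed").fit(K, labels)
    print(clf.predict(K[:5]))  # kernel rows between test and training clips

With a precomputed kernel, swapping in learned MKL weights or a chi-square histogram kernel only changes how K is built, not the classifier.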
