The Visual Computer > Motion keypoint trajectory and covariance descriptor for human action recognition

Motion keypoint trajectory and covariance descriptor for human action recognition



Abstract

Human action recognition from videos is a challenging task in computer vision. In recent years, histogram-based descriptors computed along dense trajectories have shown promising results for human action recognition, but they usually ignore the motion information of the tracked points, and the relationships between different motion variables are not well utilized. To address this issue, we propose a motion keypoint trajectory (MKT) approach and a trajectory-based covariance (TBC) descriptor, which is computed along the motion keypoint trajectories. The proposed MKT approach tracks motion keypoints at multiple spatial scales and employs an optical flow rectification algorithm to reduce the influence of camera motion, and thus achieves better performance than the improved dense trajectory (IDT) approach well known in the literature. In particular, MKT is faster than IDT, because MKT does not require human detection and extracts fewer trajectories than IDT. Furthermore, the TBC descriptor outperforms classical histogram-based descriptors such as the Histogram of Oriented Gradients, Histogram of Optical Flow and Motion Boundary Histogram. Experimental results on three challenging datasets (Olympic Sports, HMDB51 and UCF50) demonstrate that our approach achieves better recognition performance than a number of state-of-the-art approaches.
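The abstract's core idea, a covariance matrix of per-point motion features computed along a trajectory, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the choice of six motion variables per frame, the regularization constant, and the log-Euclidean mapping are all assumptions commonly used with covariance descriptors.

```python
import numpy as np

def trajectory_covariance_descriptor(features):
    """Sketch of a trajectory-based covariance (TBC) descriptor.

    `features` is an (L, d) array holding one d-dimensional feature
    vector per tracked point along a trajectory of length L (e.g.
    image gradients and optical-flow components; the exact feature
    set is an assumption here, not taken from the paper).
    """
    features = np.asarray(features, dtype=float)
    centered = features - features.mean(axis=0)
    # Sample covariance of the motion variables along the trajectory:
    # this captures how the variables co-vary, which plain histograms miss.
    cov = centered.T @ centered / (features.shape[0] - 1)
    # Regularize so the matrix is strictly positive definite, then map
    # to log-Euclidean space so descriptors can be compared with the
    # ordinary Euclidean distance.
    cov += 1e-6 * np.eye(cov.shape[0])
    eigvals, eigvecs = np.linalg.eigh(cov)
    log_cov = eigvecs @ np.diag(np.log(eigvals)) @ eigvecs.T
    # The matrix is symmetric, so the upper triangle suffices.
    iu = np.triu_indices(cov.shape[0])
    return log_cov[iu]

# Example: a trajectory tracked over 15 frames, 6 motion variables per frame.
rng = np.random.default_rng(0)
desc = trajectory_covariance_descriptor(rng.normal(size=(15, 6)))
print(desc.shape)  # (21,): 6*(6+1)/2 upper-triangular entries
```

Such a vector can then be fed to a standard encoding and classification pipeline (e.g. Fisher vectors plus a linear SVM), in place of the HOG/HOF/MBH histograms mentioned above.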

