IEEE International Conference on Multimedia and Expo

Learning informative pairwise joints with energy-based temporal pyramid for 3D action recognition



Abstract

This paper presents an effective local spatial-temporal descriptor for action recognition from skeleton sequences. The unique property of our descriptor is that it takes both spatial-temporal discrimination and action speed variations into account, aiming to jointly address the problems of distinguishing similar actions and identifying actions performed at different speeds. The algorithm consists of two stages. First, a frame selection method removes noisy skeletons from a given skeleton sequence. The joints of the selected skeletons are mapped to a high-dimensional space, where each point encodes the kinematics, time label, and joint label of a skeleton joint. To capture relative relationships among joints, pairs of points from this space are then jointly mapped to a new space, where each point encodes the relative relationships between skeleton joints. Second, Fisher Vector (FV) encoding is applied to all points in the new space to obtain a compact feature representation. To cope with speed variations across actions, an energy-based temporal pyramid is used to form a multi-temporal FV representation, which is fed into a kernel-based extreme learning machine classifier for recognition. Extensive experiments on benchmark datasets consistently show that our method outperforms state-of-the-art approaches to skeleton-based action recognition.
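Since only the abstract is available here, the following Python sketch merely illustrates the kind of pipeline it describes, on synthetic data. The pairwise joint features, the half-energy split of the temporal pyramid, the first-order Fisher Vector, the RBF kernel extreme learning machine, and all function names (pairwise_joint_points, fisher_vector, energy_split, rbf) are simplifying assumptions for illustration, not the authors' exact formulation.

```python
# Minimal sketch of a pairwise-joint + energy-based temporal pyramid + FV + kernel-ELM
# pipeline, loosely following the abstract. Details are assumptions, not the paper's method.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def pairwise_joint_points(skeleton_seq):
    """Map each joint pair in each frame to a point holding the relative
    position plus normalized time and joint-pair labels (simplified)."""
    T, J, _ = skeleton_seq.shape
    points = []
    for t in range(T):
        for i in range(J):
            for j in range(i + 1, J):
                rel = skeleton_seq[t, i] - skeleton_seq[t, j]   # relative joint position
                points.append(np.concatenate([rel, [t / T, i / J, j / J]]))
    return np.asarray(points)

def fisher_vector(points, gmm):
    """First-order Fisher Vector with respect to the GMM means (simplified)."""
    post = gmm.predict_proba(points)                       # (N, K) responsibilities
    diff = points[:, None, :] - gmm.means_[None]           # (N, K, D) deviations
    fv = (post[..., None] * diff / np.sqrt(gmm.covariances_[None])).sum(0)
    fv /= len(points) * np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()

def energy_split(skeleton_seq):
    """Split a sequence at the frame where half of the total motion energy
    (summed per-frame joint displacement) has been accumulated."""
    motion = np.linalg.norm(np.diff(skeleton_seq, axis=0), axis=-1).sum(-1)
    cum = np.cumsum(motion) / motion.sum()
    mid = int(np.searchsorted(cum, 0.5)) + 1
    return [skeleton_seq, skeleton_seq[:mid + 1], skeleton_seq[mid:]]

# Synthetic skeleton sequences: (frames, joints, 3) coordinates with random labels.
seqs = [rng.normal(size=(int(rng.integers(20, 40)), 15, 3)) for _ in range(20)]
labels = rng.integers(0, 3, size=len(seqs))

# Fit a small GMM codebook on pairwise-joint points from a subset of sequences.
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(np.vstack([pairwise_joint_points(s) for s in seqs[:10]]))

# Multi-temporal FV: concatenate the FV of the whole sequence with the FVs
# of its two energy-based segments.
X = np.array([np.concatenate([fisher_vector(pairwise_joint_points(seg), gmm)
                              for seg in energy_split(s)]) for s in seqs])

# Kernel extreme learning machine: RBF kernel with a ridge-style closed-form solution.
def rbf(A, B, gamma=0.1):
    d = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-gamma * d)

Y = np.eye(3)[labels]                                      # one-hot targets
K = rbf(X, X)
beta = np.linalg.solve(K + np.eye(len(X)) / 100.0, Y)      # regularized output weights
pred = rbf(X, X) @ beta
print("train accuracy:", (pred.argmax(1) == labels).mean())
```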
