Action recognition in cluttered dynamic scenes using Pose-Specific Part Models

Abstract

We present an approach to recognizing single-actor human actions in complex backgrounds. We adopt a Joint Tracking and Recognition approach, which tracks the actor's pose by sampling from 3D action models. Most existing approaches of this kind require large amounts of training data or MoCAP to handle multiple viewpoints, and often rely on clean actor silhouettes. The action models in our approach are obtained by annotating keyposes in 2D, lifting them to 3D stick figures, and then computing the transformation matrices between the 3D keypose figures. Poses sampled from coarse action models may not fit the observations well; to overcome this difficulty, we propose an approach for efficiently localizing a pose by generating a Pose-Specific Part Model (PSPM), which captures the appropriate kinematic and occlusion constraints in a tree structure. In addition, our approach does not require pose silhouettes. We show improvements over previous results on two publicly available datasets, as well as on a novel, augmented dataset with dynamic backgrounds.
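To make the tree-structured part-model idea concrete, below is a minimal sketch, not the authors' code, of exact max-sum inference on a kinematic tree: each body part takes one of K candidate image locations, unary terms score part appearance, and pairwise terms score kinematic compatibility along tree edges. The part names, the star-shaped tree, and the random scores are placeholder assumptions for illustration; the paper's PSPM additionally builds the tree and its occlusion constraints from the sampled 3D pose.

```python
# Hypothetical illustration of dynamic-programming inference on a
# tree-structured part model; all scores are random placeholders.
import numpy as np

def tree_max_sum(unary, pairwise, children, root=0):
    """Exact MAP inference on a tree of parts.

    unary:    dict part -> (K,) appearance scores
    pairwise: dict (parent, child) -> (K, K) kinematic scores
    children: dict part -> list of child parts
    Returns the best candidate index for every part.
    """
    msg, back = {}, {}

    def upward(p):
        score = unary[p].copy()
        for c in children.get(p, []):
            upward(c)
            # For each parent state, pick the best child state.
            table = pairwise[(p, c)] + msg[c][None, :]   # (K, K)
            back[(p, c)] = table.argmax(axis=1)
            score += table.max(axis=1)
        msg[p] = score

    upward(root)
    labels = {root: int(msg[root].argmax())}

    def downward(p):
        for c in children.get(p, []):
            labels[c] = int(back[(p, c)][labels[p]])
            downward(c)

    downward(root)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = 50                                  # candidate locations per part
    parts = ["torso", "head", "l_arm", "r_arm", "l_leg", "r_leg"]
    children = {0: [1, 2, 3, 4, 5]}         # star-shaped tree rooted at the torso
    unary = {i: rng.normal(size=K) for i in range(len(parts))}
    pairwise = {(0, c): rng.normal(size=(K, K)) for c in children[0]}
    best = tree_max_sum(unary, pairwise, children)
    print({parts[i]: j for i, j in best.items()})
```

Because the model is a tree, this two-pass dynamic program finds the globally best part configuration in O(number of parts x K^2), which is what makes per-frame pose localization against sampled keyposes tractable.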
