
Study of Recognizing Human Motion Observed from an Arbitrary Viewpoint Based on Decomposition of a Tensor Containing Multiple View Motions


Abstract

We propose a tensor-decomposition-based algorithm that recognizes an observed action performed by an unknown person from an unknown viewpoint, neither of which is included in the database. Our previous research aimed at motion recognition from a single viewpoint; in this paper, we extend that approach to human motion recognition from an arbitrary viewpoint. To address this, we build a tensor database, a multi-dimensional array whose dimensions correspond to human models, viewpoint angles, and action classes. The entry of the tensor for a given combination of human silhouette model, viewpoint angle, and action class is the series of mesh feature vectors calculated for each frame of the sequence. To recognize a human motion, the actions of one of the persons in the tensor are replaced by the synthesized actions, and the core tensor of the modified tensor is computed. This process is repeated for each combination of action, person, and viewpoint, and for each iteration the difference between the replaced and original core tensors is computed. The hypothesis that yields the minimal difference is taken as the recognition result. The recognition results show the validity of the proposed method, which is experimentally compared with the Nearest Neighbor rule. The proposed method is very stable, as each action was recognized with over 75% accuracy.
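The recognition procedure outlined in the abstract (building a person × viewpoint × action tensor of mesh feature sequences, substituting the observed motion for one entry, recomputing the core tensor, and keeping the hypothesis with the smallest change) can be illustrated with a small NumPy sketch. The tensor layout, the feature dimensionality, the HOSVD-style computation of the core tensor, and the Frobenius-norm difference are assumptions made for illustration only; the paper's actual mesh feature extraction and decomposition details are not reproduced here.

```python
# Minimal sketch of core-tensor-based action recognition, assuming an
# HOSVD-style decomposition and a Frobenius-norm difference (illustrative only).
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding (matricization): move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def core_tensor(T):
    """Core tensor of a higher-order SVD (Tucker/HOSVD) of T."""
    # Factor matrices: left singular vectors of each mode-n unfolding.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]
    core = T
    for mode, U in enumerate(factors):
        # Mode-n product of the partially projected tensor with U^T.
        moved = np.moveaxis(core, mode, 0)
        core = np.moveaxis(np.tensordot(U.T, moved, axes=1), 0, mode)
    return core

def recognize(database, observed):
    """
    database: (n_persons, n_views, n_actions, n_features) array; the last axis is
              a flattened per-frame mesh feature sequence for that combination.
    observed: (n_features,) feature sequence of the unknown person / unknown viewpoint.
    Returns the (person, view, action) hypothesis whose substitution changes the
    core tensor the least, together with that minimal difference.
    """
    base_core = core_tensor(database)
    best, best_diff = None, np.inf
    n_persons, n_views, n_actions, _ = database.shape
    for p in range(n_persons):
        for v in range(n_views):
            for a in range(n_actions):
                replaced = database.copy()
                replaced[p, v, a] = observed              # substitute the observed motion
                diff = np.linalg.norm(core_tensor(replaced) - base_core)
                if diff < best_diff:
                    best, best_diff = (p, v, a), diff
    return best, best_diff

if __name__ == "__main__":
    # Toy data: 3 persons, 4 viewpoints, 5 actions, 60-dimensional feature sequences.
    rng = np.random.default_rng(0)
    db = rng.standard_normal((3, 4, 5, 60))
    query = db[1, 2, 3] + 0.05 * rng.standard_normal(60)  # a noisy copy of one entry
    print(recognize(db, query))
```

In this sketch the action label of the winning hypothesis serves as the recognition result, mirroring the minimal-difference criterion described in the abstract; the exhaustive loop over person, viewpoint, and action corresponds to the repeated substitution step.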
