Pattern Recognition: The Journal of the Pattern Recognition Society

Robust human activity recognition from depth video using spatiotemporal multi-fused features


Abstract

The recently developed depth imaging technologies have provided new directions for human activity recognition (HAR) without attaching optical markers or any other motion sensors to human body parts. In this paper, we propose novel multi-fused features for an online HAR system that recognizes human activities from continuous sequences of depth maps. The proposed online HAR system segments human depth silhouettes using temporal human motion information and obtains human skeleton joints using spatiotemporal human body information. It then extracts spatiotemporal multi-fused features that concatenate four skeleton joint features and one body shape feature. The skeleton joint features include the torso-based distance feature (DT), the key joint-based distance feature (DK), the spatiotemporal magnitude feature (M), and the spatiotemporal directional angle feature (theta). The body shape feature, called HOG-DDS, represents the projections of the depth differential silhouettes (DDS) between two consecutive frames onto three orthogonal planes in the histogram of oriented gradients (HOG) format. The size of the proposed spatiotemporal multi-fused feature is reduced by mapping it to a code vector in a codebook generated by a vector quantization method. Hidden Markov models (HMMs) are then trained with the code vectors of the multi-fused features, and the segmented human activities are recognized by a forward spotting scheme using the trained HMM-based activity classifiers. Experimental results on three challenging depth video datasets, IM-Daily-DepthActivity, MSRAction3D, and MSRDailyActivity3D, demonstrate that the proposed online HAR method using the proposed multi-fused features outperforms state-of-the-art HAR methods in terms of recognition accuracy. (C) 2016 Elsevier Ltd. All rights reserved.
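To make the pipeline described in the abstract more concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the skeleton-joint part of the method: per-frame concatenation of torso-based distances, key-joint distances, motion magnitudes, and directional angles; k-means vector quantization into a codebook; and likelihood scoring of the resulting code-vector sequence against per-class discrete HMMs. The joint indices, codebook size, and HMM parameters are hypothetical assumptions for illustration, and the HOG-DDS body shape feature, silhouette segmentation, and forward spotting scheme are omitted.

# Minimal sketch under stated assumptions; joint indices, codebook size,
# and HMM parameters below are hypothetical, not taken from the paper.
import numpy as np

TORSO = 0                      # assumed index of the torso joint
KEY_JOINTS = [3, 7, 11, 15]    # assumed "key" joints (e.g. head, hands, feet)

def frame_features(prev_joints, joints):
    """Concatenate four skeleton-joint features for one frame.
    joints: (J, 3) array of 3-D joint positions."""
    torso = joints[TORSO]
    d_torso = np.linalg.norm(joints - torso, axis=1)            # torso-based distances (DT)
    key = joints[KEY_JOINTS]
    d_key = np.linalg.norm(key[:, None] - key[None, :], axis=2)[
        np.triu_indices(len(KEY_JOINTS), k=1)]                  # key joint distances (DK)
    motion = joints - prev_joints
    magnitude = np.linalg.norm(motion, axis=1)                  # spatiotemporal magnitude (M)
    angle = np.arctan2(motion[:, 1], motion[:, 0])               # directional angle (theta), xy-plane
    return np.concatenate([d_torso, d_key, magnitude, angle])

def build_codebook(features, k=32, iters=10, seed=0):
    """Plain k-means vector quantization; returns a (k, dim) codebook."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(features[:, None] - codebook[None], axis=2), axis=1)
        for c in range(k):
            if np.any(labels == c):
                codebook[c] = features[labels == c].mean(axis=0)
    return codebook

def quantize(features, codebook):
    """Map each per-frame feature vector to the index of its nearest codeword."""
    return np.argmin(
        np.linalg.norm(features[:, None] - codebook[None], axis=2), axis=1)

def hmm_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (standard forward algorithm in log space).
    pi: (S,) initial probs, A: (S, S) transitions, B: (S, K) emissions."""
    log_alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        log_alpha = np.log(B[:, o]) + np.logaddexp.reduce(
            log_alpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(log_alpha)

def recognize(obs, class_hmms):
    """Pick the activity whose HMM assigns the highest likelihood
    to the quantized code-vector sequence."""
    scores = {name: hmm_log_likelihood(obs, *params)
              for name, params in class_hmms.items()}
    return max(scores, key=scores.get)

In this sketch each activity class would supply its own (pi, A, B) parameters in class_hmms; the paper's forward spotting scheme, which additionally detects start and end points of activities in a continuous stream, is not reproduced here.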
