Information Sciences: An International Journal

Online human action recognition based on incremental learning of weighted covariance descriptors



Abstract

Unlike traditional action recognition based on video segments, online action recognition aims to recognize actions in a continuous manner from an unsegmented stream of data. One approach to online recognition is based on accumulating evidence over time. This paper presents an effective framework for such an approach to online action recognition from a stream of noisy skeleton data, using a weighted covariance descriptor as a means to accumulate information. In particular, a fast incremental updating method for the weighted covariance descriptor is developed. The weighted covariance descriptor follows the principle that past frames contribute less to the accumulated evidence, while recent and informative frames, such as key frames, contribute more. To determine the discriminativeness of each frame, a pseudo-neutral pose is proposed to recover the neutral pose from an arbitrary pose in a frame. Two recognition methods are developed using the weighted covariance descriptor. The first applies nearest-neighbor search over a set of trained actions using a Riemannian metric on covariance matrices. The second uses a Log-Euclidean kernel based SVM. Extensive experiments on the MSRC-12 Kinect Gesture dataset, the Online RGBD Action dataset, and our newly collected online action recognition dataset demonstrate the efficacy of the proposed framework in terms of latency, miss rate, and error rate. (C) 2018 Elsevier Inc. All rights reserved.
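The accumulation scheme described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's exact formulation: it maintains running weighted sums so that the weighted covariance descriptor is updated incrementally in O(d²) per frame, and compares two descriptors with a Log-Euclidean distance of the kind that underlies both the nearest-neighbor and kernel-SVM classifiers mentioned above. All names (`WeightedCovariance`, `log_euclidean_distance`) and the weighting scheme itself are hypothetical.

```python
import numpy as np

class WeightedCovariance:
    """Incrementally accumulated weighted covariance descriptor over a
    stream of per-frame feature vectors (hypothetical sketch)."""

    def __init__(self, dim):
        self.w_sum = 0.0                 # running sum of frame weights
        self.s1 = np.zeros(dim)          # weighted sum of feature vectors
        self.s2 = np.zeros((dim, dim))   # weighted sum of outer products

    def update(self, x, w):
        # O(d^2) per frame; past frames never need to be revisited
        self.w_sum += w
        self.s1 += w * x
        self.s2 += w * np.outer(x, x)

    def covariance(self, eps=1e-6):
        mu = self.s1 / self.w_sum
        cov = self.s2 / self.w_sum - np.outer(mu, mu)
        # small regularizer keeps the matrix symmetric positive definite,
        # as required by Riemannian / Log-Euclidean metrics
        return cov + eps * np.eye(len(mu))

def log_euclidean_distance(a, b):
    """Frobenius distance between the matrix logarithms of two SPD matrices."""
    def logm_spd(m):
        vals, vecs = np.linalg.eigh(m)
        return (vecs * np.log(vals)) @ vecs.T
    return np.linalg.norm(logm_spd(a) - logm_spd(b))
```

In this sketch, the per-frame weight `w` would decay for older frames and be boosted for informative key frames, matching the stated principle; a nearest-neighbor classifier would then compare a query descriptor against the trained action descriptors under `log_euclidean_distance`.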

