IEEE Winter Conference on Applications of Computer Vision

Global Co-occurrence Feature Learning and Active Coordinate System Conversion for Skeleton-based Action Recognition


Abstract

Skeleton-based action recognition has attracted increasing attention in recent years, and the rapid development of deep learning has greatly improved its performance. However, current explorations of action co-occurrence are still not comprehensive. Most existing works mine co-occurrence features from the temporal or the spatial domain separately and only combine them at the end. In contrast, our approach, called spatio-temporal-unit feature enhancement (STUFE), learns temporal and spatial co-occurrence features in an integrated, global manner. To better align the skeleton data, we introduce a novel preprocessing method called active coordinate system conversion (ACSC), in which a coordinate system is learned automatically to transform skeleton samples for alignment. Moreover, the proposed methods are compatible with the two current types of mainstream models, CNN-based and GCN-based. Finally, we validated our methods on two mainstream models using the NTU-RGB+D and SBU Kinect Interaction benchmarks. The results show that our methods achieve state-of-the-art performance.
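The abstract describes ACSC only at a high level ("a coordinate system is learned automatically to transform skeleton samples for alignment"). The PyTorch sketch below illustrates one plausible reading of that idea: a rigid transform, parameterized here by learnable Euler angles and a translation, applied to every skeleton sample before the backbone. The module name, the Euler-angle parameterization, and the tensor layout are assumptions made for illustration, not the paper's exact formulation.

import torch
import torch.nn as nn

class ActiveCoordinateConversion(nn.Module):
    """Applies a learnable rigid transform to 3D skeleton joints
    so that all samples are aligned in a shared coordinate system."""

    def __init__(self):
        super().__init__()
        self.angles = nn.Parameter(torch.zeros(3))  # learnable Euler angles
        self.offset = nn.Parameter(torch.zeros(3))  # learnable translation

    def rotation_matrix(self):
        # Compose rotations about the x, y, and z axes from the angles.
        ax, ay, az = self.angles
        cx, sx = torch.cos(ax), torch.sin(ax)
        cy, sy = torch.cos(ay), torch.sin(ay)
        cz, sz = torch.cos(az), torch.sin(az)
        one = torch.ones_like(cx)
        zero = torch.zeros_like(cx)
        rx = torch.stack([one, zero, zero,
                          zero, cx, -sx,
                          zero, sx, cx]).reshape(3, 3)
        ry = torch.stack([cy, zero, sy,
                          zero, one, zero,
                          -sy, zero, cy]).reshape(3, 3)
        rz = torch.stack([cz, -sz, zero,
                          sz, cz, zero,
                          zero, zero, one]).reshape(3, 3)
        return rz @ ry @ rx

    def forward(self, x):
        # x: (batch, frames, joints, 3) skeleton coordinates.
        # Translate, then rotate every joint into the learned frame.
        return (x - self.offset) @ self.rotation_matrix().T

# Hypothetical usage with NTU-RGB+D-style input (25 joints per frame):
acsc = ActiveCoordinateConversion()
skeleton = torch.randn(8, 64, 25, 3)  # batch of 8 clips, 64 frames each
aligned = acsc(skeleton)

Because the transform is an ordinary nn.Module, it can be prepended to either a CNN-based or a GCN-based backbone and trained jointly with the recognition loss, which is consistent with the abstract's claim that the proposed methods are compatible with both model families.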