IEEE International Conference on Image Processing

Pairwise Adjacency Matrix on Spatial Temporal Graph Convolution Network for Skeleton-Based Two-Person Interaction Recognition



Abstract

Spatial-temporal graph convolutional networks (ST-GCN) have achieved outstanding performance on human action recognition; however, they may be less effective on two-person interaction recognition (TPIR) tasks because the relationship between the two skeletons is not considered. In this study, we present an improvement of the ST-GCN model focused on TPIR that employs a pairwise adjacency matrix to capture person-person skeleton relationships (ST-GCN-PAM). To validate the effectiveness of the proposed ST-GCN-PAM model on TPIR, experiments were conducted on NTU RGB+D120. The model was also examined on the Kinetics dataset and on NTU RGB+D60. The results show that the proposed ST-GCN-PAM outperforms state-of-the-art methods on the mutual actions of NTU RGB+D120, achieving 83.28% (cross-subject) and 88.31% (cross-view) accuracy. The model is also superior to the original ST-GCN on the multi-human actions of the Kinetics dataset, achieving 41.68% Top-1 and 88.91% Top-5 accuracy.
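The core idea of a pairwise adjacency matrix can be illustrated as a block adjacency over the joints of both people: each person's skeleton edges sit on the diagonal blocks, and cross-person links occupy the off-diagonal blocks, so one spatial graph convolution mixes features across the two skeletons. The sketch below is illustrative, not the paper's exact formulation: the bone subset, the choice of identity cross-person links, and the channel sizes are all assumptions.

```python
import numpy as np

V = 25               # joints per person (NTU RGB+D skeleton)
C_in, C_out = 3, 16  # illustrative channel dimensions

# Intra-person adjacency: self-loops plus a small example subset of bones.
bones = [(0, 1), (1, 20), (20, 2), (2, 3)]
A_intra = np.eye(V)
for i, j in bones:
    A_intra[i, j] = A_intra[j, i] = 1.0

# Pairwise (cross-person) links: here, corresponding joints of the two
# people are connected (an assumed, simple choice for illustration).
A_pair = np.eye(V)

# Block adjacency over 2*V joints: intra-skeleton edges on the diagonal
# blocks, person-person links on the off-diagonal blocks.
A = np.block([[A_intra, A_pair],
              [A_pair, A_intra]])

# Symmetric normalization D^{-1/2} A D^{-1/2}, as in ST-GCN's spatial graph
# convolution.
D = np.diag(A.sum(axis=1) ** -0.5)
A_norm = D @ A @ D

# One spatial graph-convolution step: (2V, C_in) features -> (2V, C_out).
rng = np.random.default_rng(0)
X = rng.standard_normal((2 * V, C_in))   # stacked joint features, both people
W = rng.standard_normal((C_in, C_out))   # learnable weight (random here)
out = A_norm @ X @ W
print(out.shape)  # (50, 16)
```

Because the off-diagonal blocks are nonzero, each joint's output aggregates features from the other person's skeleton as well as its own, which is what lets the model capture person-person relationships that a per-skeleton adjacency cannot.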
