Journal of Visual Communication & Image Representation

Relative view based holistic-separate representations for two-person interaction recognition using multiple graph convolutional networks

Abstract

In this paper, we focus on recognizing person-person interactions using skeletal data captured from depth sensors. First, we propose a novel and efficient view transformation scheme. The skeletal interaction sequence is re-observed under a new coordinate system that is invariant to the various setups and capturing views of depth cameras, as well as to the exchange of position or facing orientation between the two persons. Second, we propose concise and discriminative interaction representations composed simply of the joint locations of the two persons. The proposed representations efficiently describe both the holistic interactive scene and the individual poses performed by each subject separately. Third, we introduce graph convolutional networks (GCNs) to directly learn the proposed skeletal interaction representations. Moreover, we design a multiple-GCN-based model to provide the final class score. Extensive experimental results on three skeletal action datasets, NTU RGB+D 60, NTU RGB+D 120, and SBU, consistently demonstrate the superiority of our interaction recognition method. (C) 2020 Elsevier Inc. All rights reserved.
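The abstract does not specify the exact view transformation, but the general idea of a relative, camera-independent coordinate system for a two-person skeleton pair can be sketched as follows. This is a hypothetical illustration, not the paper's method: it assumes NumPy arrays of shape (T, J, 3) per person and a known spine-joint index, anchors the origin between the two subjects, and aligns the x-axis with the inter-person direction so the result no longer depends on camera placement.

```python
import numpy as np

def relative_view_transform(p1, p2, spine=0):
    """Hypothetical sketch of a relative-view normalization for a
    two-person skeleton pair (each of shape (T, J, 3)).

    Joints are re-expressed in a coordinate system anchored between the
    two subjects in the first frame, so the output is unchanged when the
    camera is translated or rotated about the vertical axis."""
    # Origin: midpoint of the two spine joints in the first frame.
    origin = 0.5 * (p1[0, spine] + p2[0, spine])
    # x-axis: direction from person 1 to person 2, projected onto the
    # ground plane (z component dropped), then normalized.
    x = p2[0, spine] - p1[0, spine]
    x[2] = 0.0
    x /= np.linalg.norm(x)
    # Keep gravity as the z-axis; y completes a right-handed basis.
    z = np.array([0.0, 0.0, 1.0])
    y = np.cross(z, x)
    R = np.stack([x, y, z])  # rows are the new basis vectors
    # Express every joint of both persons in the new frame.
    f = lambda p: (p - origin) @ R.T
    return f(p1), f(p2)
```

Keeping gravity as the z-axis means the sketch is invariant only to camera yaw and translation, a common simplification for depth-camera skeletons whose vertical axis is already calibrated; full invariance to person exchange, as claimed in the paper, would need an additional symmetric treatment of the two subjects.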
