...
Journal of Visual Communication & Image Representation

VirtualActionNet: A strong two-stream point cloud sequence network for human action recognition

Abstract

In this paper, we propose VirtualActionNet, a strong two-stream point cloud sequence network for 3D human action recognition. In the data preprocessing stage, we transform the depth sequence into a point cloud sequence, which serves as the input to VirtualActionNet. To encode intra-frame appearance structure, static point cloud techniques are first employed in a virtual action sequence generation module, which abstracts the point cloud sequence into a virtual action sequence. A two-stream network framework is then presented to model the virtual action sequence. Specifically, we design an appearance stream module that aggregates the appearance information preserved in each virtual action frame, and a motion stream module that captures dynamic changes along the time dimension. Finally, a joint loss strategy is adopted during training to improve the action prediction accuracy of the two-stream network. Extensive experiments on three publicly available datasets demonstrate the effectiveness of the proposed VirtualActionNet.
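To make the described pipeline concrete, the following is a minimal sketch, assuming PyTorch, of how a two-stream point cloud sequence classifier with a joint loss could be wired together. The module names, the PointNet-style per-frame encoder, the GRU-based motion model, the equal loss weighting, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal two-stream sketch over a point cloud sequence with a joint loss.
# Assumptions (not from the paper): PointNet-style per-frame encoder,
# mean-pooled appearance stream, GRU over frame differences as the motion
# stream, and equally weighted cross-entropy terms in the joint loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrameEncoder(nn.Module):
    """Per-frame encoder: shared point-wise MLP followed by max pooling."""
    def __init__(self, point_dim=3, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, seq):                        # seq: (B, T, N, 3)
        b, t, n, d = seq.shape
        x = self.mlp(seq.reshape(b * t, n, d))     # per-point features
        x = x.max(dim=1).values                    # per-frame global feature
        return x.reshape(b, t, -1)                 # (B, T, F)


class TwoStreamSketch(nn.Module):
    def __init__(self, num_classes=60, feat_dim=128):
        super().__init__()
        self.encoder = FrameEncoder(feat_dim=feat_dim)
        # Appearance stream: aggregate the appearance info of each frame.
        self.appearance_head = nn.Linear(feat_dim, num_classes)
        # Motion stream: model dynamic changes along the time dimension.
        self.motion_rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.motion_head = nn.Linear(feat_dim, num_classes)
        # Fused prediction combining both streams.
        self.fusion_head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, seq):
        frame_feats = self.encoder(seq)                       # (B, T, F)
        app_feat = frame_feats.mean(dim=1)                    # appearance summary
        deltas = frame_feats[:, 1:] - frame_feats[:, :-1]     # temporal differences
        _, h = self.motion_rnn(deltas)
        mot_feat = h[-1]                                      # (B, F)
        fused = torch.cat([app_feat, mot_feat], dim=1)
        return (self.appearance_head(app_feat),
                self.motion_head(mot_feat),
                self.fusion_head(fused))


def joint_loss(app_logits, mot_logits, fused_logits, labels):
    # Sum of per-stream and fused cross-entropy terms; the paper's exact
    # weighting is not specified here, so equal weights are assumed.
    return (F.cross_entropy(app_logits, labels)
            + F.cross_entropy(mot_logits, labels)
            + F.cross_entropy(fused_logits, labels))


if __name__ == "__main__":
    model = TwoStreamSketch(num_classes=10)
    clouds = torch.randn(2, 16, 256, 3)       # 2 sequences, 16 frames, 256 points
    labels = torch.randint(0, 10, (2,))
    loss = joint_loss(*model(clouds), labels)
    loss.backward()
    print(loss.item())
```

In this sketch the joint loss supervises the appearance stream, the motion stream, and the fused prediction simultaneously, which mirrors the abstract's description of training both streams under a joint loss strategy.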
