Journal of Electronic Imaging

Trajectory-aware three-stream CNN for video action recognition


Abstract

Video-based human action recognition is a challenging task in computer vision. In recent years, the convolutional neural network (CNN) and its extended versions have shown promising results for video action recognition. However, most existing methods cannot handle global motion information effectively, especially long-term motion, which is crucial for representing complex non-periodic actions. To address this issue, a stacked trajectory energy image (STEI) is proposed by extracting trajectories from motion saliency regions and stacking them onto one grayscale image. This yields an STEI with discriminative texture features that effectively characterize the global motion across multiple consecutive frames. Then, a three-stream CNN framework is proposed to simultaneously capture the spatial, temporal, and global motion information of the action from RGB frames, optical flow, and the STEI. Moreover, a trajectory-aware convolution strategy is introduced that incorporates local and long-term motion information so as to learn motion features directly and effectively from three complementary action-related regions. Finally, the learned features are aggregated and categorized by a linear support vector machine. The experimental results on two challenging datasets (i.e., HMDB51 and UCF101) demonstrate that our approach statistically outperforms a number of state-of-the-art methods. (C) 2018 SPIE and IS&T
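The abstract describes the STEI only at a high level (trajectories extracted from motion-salient regions and stacked onto a single grayscale image). The following is a minimal sketch of that accumulation idea, assuming trajectories are already available as per-frame point sequences; the function name `build_stei`, the unit-increment accumulation, and the max-normalization are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def build_stei(trajectories, frame_shape):
    """Hypothetical sketch: accumulate trajectory points from multiple
    consecutive frames into one grayscale stacked trajectory energy image.

    trajectories : list of (T, 2) integer arrays of (row, col) points,
                   assumed to come from motion-salient regions.
    frame_shape  : (height, width) of the video frames.
    """
    stei = np.zeros(frame_shape, dtype=np.float32)
    for traj in trajectories:
        for r, c in traj:
            # Stack (accumulate) trajectory energy at each visited pixel.
            if 0 <= r < frame_shape[0] and 0 <= c < frame_shape[1]:
                stei[r, c] += 1.0
    if stei.max() > 0:
        stei = stei / stei.max()  # normalize to [0, 1]
    return (stei * 255).astype(np.uint8)  # grayscale image for the CNN stream

# Toy usage: two short trajectories on a 120x160 frame.
trajs = [np.array([[10, 20], [11, 21], [12, 23]]),
         np.array([[50, 80], [51, 82], [52, 85]])]
stei_img = build_stei(trajs, (120, 160))
print(stei_img.shape, stei_img.dtype)  # (120, 160) uint8
```

The resulting image would then serve as the input to the third (global motion) stream alongside the RGB and optical-flow streams described in the abstract.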
