Mathematical Problems in Engineering: Theory, Methods and Applications

The Multidimensional Motion Features of Spatial Depth Feature Maps: An Effective Motion Information Representation Method for Video-Based Action Recognition

Abstract

In video action recognition based on deep learning, the design of the neural network focuses on how to acquire effective spatial information and motion information quickly. This paper proposes a deep network that can obtain both spatial information and motion information for video classification, called MDFs (the multidimensional motion features of the deep feature map net). With this method, spatial information and motion information in videos are obtained simply by feeding image frame data into the neural network. MDFs originate from the definition of 3D convolution: multiple 3D convolution kernels with different information focuses act on depth feature maps to obtain effective motion information in both the spatial and temporal dimensions. In addition, we split the 3D convolution along the spatial and temporal dimensions, and the spatial network's feature maps have lower dimensionality than the original frame images, which reduces the computing resources required by the multichannel grouped 3D convolutional network. To differentiate the weights of spatial regions, a spatial feature weighted pooling layer guided by spatial-temporal motion information is introduced, directing attention toward highly discriminative information. By means of a multilevel LSTM, we fuse global semantic information with depth features at different levels, so that the fully connected layers, which carry rich classification information, can provide a frame attention mechanism for the spatial information layers. MDFs need to act only on RGB images. Experiments on three widely used action recognition datasets, UCF101, UCF11, and HMDB51, show that the MDF network achieves accuracy comparable to that of two-stream methods (RGB and optical flow), which require both frame data and optical flow data, in video classification tasks.
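
Two of the building blocks described in the abstract, splitting a 3D convolution into separate spatial and temporal steps and weighting spatial locations by motion information before pooling, can be illustrated with a minimal PyTorch sketch. The kernel sizes, channel counts, module names, and the frame-difference motion cue below are illustrative assumptions, not the authors' actual MDF implementation.

# Minimal sketch of a factorized spatio-temporal convolution and a
# motion-guided weighted pooling layer. All names and hyperparameters
# are assumptions made for illustration, not the paper's MDF network.
import torch
import torch.nn as nn


class FactorizedSpatioTemporalConv(nn.Module):
    """Splits a 3D convolution into a spatial step and a temporal step."""

    def __init__(self, in_channels, out_channels, spatial_kernel=3, temporal_kernel=3):
        super().__init__()
        # Spatial convolution: acts within each frame's feature map (1 x k x k).
        self.spatial = nn.Conv3d(
            in_channels, out_channels,
            kernel_size=(1, spatial_kernel, spatial_kernel),
            padding=(0, spatial_kernel // 2, spatial_kernel // 2),
        )
        # Temporal convolution: acts across frames at each location (k x 1 x 1).
        self.temporal = nn.Conv3d(
            out_channels, out_channels,
            kernel_size=(temporal_kernel, 1, 1),
            padding=(temporal_kernel // 2, 0, 0),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, time, height, width) depth feature maps
        x = self.relu(self.spatial(x))
        x = self.relu(self.temporal(x))
        return x


class MotionGuidedWeightedPooling(nn.Module):
    """Weights spatial locations by temporal variation before pooling."""

    def forward(self, x):
        # x: (batch, channels, time, height, width)
        # Frame-to-frame differences serve as a simple motion cue (assumption).
        motion = (x[:, :, 1:] - x[:, :, :-1]).abs().mean(dim=(1, 2))  # (B, H, W)
        weights = torch.softmax(motion.flatten(1), dim=1).view_as(motion)
        # Weighted spatial pooling of the temporally averaged features.
        pooled = (x.mean(dim=2) * weights.unsqueeze(1)).sum(dim=(2, 3))
        return pooled  # (batch, channels)


if __name__ == "__main__":
    frames = torch.randn(2, 64, 8, 28, 28)  # e.g. depth feature maps of 8 frames
    feats = FactorizedSpatioTemporalConv(64, 128)(frames)
    clip_descriptor = MotionGuidedWeightedPooling()(feats)
    print(clip_descriptor.shape)  # torch.Size([2, 128])

In this sketch the clip descriptor could then be passed frame-by-frame to an LSTM for classification; the abstract's multilevel LSTM fusion and frame attention mechanism are not reproduced here.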