Applied Soft Computing

Human action recognition using two-stream attention based LSTM networks



Abstract

It is well known that different frames play different roles in feature learning for video-based human action recognition. However, most existing deep learning models assign the same weights to different visual and temporal cues during parameter training, which severely limits the model's ability to determine feature distinctiveness. To address this problem, this paper exploits the visual attention mechanism and proposes an end-to-end two-stream attention-based LSTM network. It can selectively focus on the effective features of the original input images and pay different levels of attention to the outputs of each deep feature map. Moreover, considering the correlation between the two deep feature streams, a deep feature correlation layer is proposed to adjust the network parameters based on this correlation. Finally, we evaluate our approach on three different datasets, and the experimental results show that our proposal achieves state-of-the-art performance in common scenarios. (C) 2019 Elsevier B.V. All rights reserved.
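The frame-weighting idea the abstract describes can be sketched as soft temporal attention over per-frame features: score each frame, softmax-normalize the scores into weights, and form a weighted summary. This is a minimal illustration, not the paper's implementation; the scoring vector below is a random stand-in for learned attention parameters, and all names are hypothetical.

```python
import numpy as np

def temporal_attention(frame_features: np.ndarray):
    """Soft attention over per-frame features of shape (T, D).

    Scores each frame with a (stand-in) scoring vector, converts the
    scores to softmax weights, and returns the attention-weighted
    summary vector along with the weights themselves.
    """
    # In the paper's setting this vector would be learned jointly with
    # the LSTM; a fixed random projection stands in for it here.
    rng = np.random.default_rng(0)
    w = rng.standard_normal(frame_features.shape[1])

    scores = frame_features @ w                    # (T,) one score per frame
    scores -= scores.max()                         # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    summary = alpha @ frame_features               # (D,) weighted feature sum
    return summary, alpha

# 16 frames of 8-dimensional features standing in for deep feature maps.
features = np.random.default_rng(1).standard_normal((16, 8))
summary, alpha = temporal_attention(features)
```

The weights `alpha` sum to one, so frames judged more informative contribute proportionally more to the clip-level representation, which is the effect the attention mechanism is meant to achieve.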
