Sensors (Basel, Switzerland)

Energy-Guided Temporal Segmentation Network for Multimodal Human Action Recognition

Abstract

To achieve satisfactory performance in human action recognition, a central task is to address the sub-action sharing problem, especially among similar action classes. Nevertheless, most existing convolutional neural network (CNN)-based action recognition algorithms uniformly divide videos into frames and then randomly select frames as inputs, ignoring the distinct characteristics of different frames. In recent years, depth videos have been increasingly used for action recognition, but most methods focus only on the spatial information of the different actions without exploiting temporal information. To address these issues, a novel energy-guided temporal segmentation method is proposed here, and a multimodal fusion strategy is combined with the proposed segmentation method to construct an energy-guided temporal segmentation network (EGTSN). Specifically, the EGTSN has two parts: energy-guided video segmentation and a multimodal fusion heterogeneous CNN. The proposed solution is evaluated on the public large-scale NTU RGB+D dataset. Comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed network.
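The abstract does not spell out how the energy guidance works; the sketch below illustrates one plausible reading under stated assumptions: per-frame energy is approximated by frame differencing, the video is split into segments of roughly equal cumulative energy rather than equal length, and one representative frame is drawn from each segment. The function names and the differencing-based energy definition are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def frame_energy(frames):
    """Per-frame motion energy: mean absolute difference between
    consecutive frames (a simple proxy; the paper's exact energy
    definition may differ)."""
    frames = frames.astype(np.float32)
    diffs = np.abs(frames[1:] - frames[:-1])
    energy = diffs.reshape(diffs.shape[0], -1).mean(axis=1)
    # Prepend zero so there is one energy value per frame.
    return np.concatenate([[0.0], energy])

def energy_guided_segments(energy, num_segments):
    """Split the video into num_segments spans of roughly equal
    cumulative energy, instead of equal frame counts."""
    cum = np.cumsum(energy)
    total = cum[-1] if cum[-1] > 0 else 1.0
    boundaries = [0]
    for k in range(1, num_segments):
        # First frame index at which cumulative energy reaches k/num_segments of the total.
        idx = int(np.searchsorted(cum, total * k / num_segments))
        boundaries.append(max(idx, boundaries[-1] + 1))
    boundaries.append(len(energy))
    return [(boundaries[i], boundaries[i + 1]) for i in range(num_segments)]

def sample_frames(frames, num_segments=8):
    """Pick one representative frame per energy-balanced segment
    (here: the highest-energy frame in each span)."""
    energy = frame_energy(frames)
    segments = energy_guided_segments(energy, num_segments)
    picks = [start + int(np.argmax(energy[start:end])) for start, end in segments]
    return frames[picks]

# Usage: `video` stands in for a (T, H, W) depth clip; RGB frames work the same way.
video = np.random.rand(64, 112, 112).astype(np.float32)
clip = sample_frames(video, num_segments=8)
print(clip.shape)  # (8, 112, 112)
```

In a multimodal setup, the same energy-guided frame indices could be applied to the RGB and depth streams so that both branches of the fusion CNN see temporally aligned inputs; this alignment step is likewise an assumption about the design rather than a detail stated in the abstract.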