ACM Transactions on Multimedia Computing, Communications, and Applications

Rethinking the Combined and Individual Orders of Derivative of States for Differential Recurrent Neural Networks: Deep Differential Recurrent Neural Networks



Abstract

Due to their special gating schemes, Long Short-Term Memory (LSTM) networks have shown greater potential to process complex sequential information than the traditional Recurrent Neural Network (RNN). The conventional LSTM, however, fails to take into consideration the impact of salient spatio-temporal dynamics present in the sequential input data. This problem was first addressed by the differential Recurrent Neural Network (dRNN), which uses a differential gating scheme known as Derivative of States (DoS). DoS uses higher orders of internal state derivatives to analyze the change in information gain originating from the salient motions between successive frames. The weighted combination of several orders of DoS is then used to modulate the gates in dRNN. While each individual order of DoS is good at modeling a certain level of salient spatio-temporal sequences, the sum of all the orders of DoS can distort the detected motion patterns. To address this problem, we propose to control the LSTM gates via individual orders of DoS. To fully utilize the different orders of DoS, we further propose to stack multiple levels of LSTM cells in increasing order of state derivatives. The proposed model progressively builds up the ability of the LSTM gates to detect salient dynamical patterns, with deeper stacked layers modeling higher orders of DoS; the proposed LSTM model is therefore termed the deep differential Recurrent Neural Network (d²RNN). The effectiveness of the proposed model is demonstrated on three publicly available human activity datasets: NUS-HGA, Violent-Flows, and UCF101. The proposed model outperforms both LSTM- and non-LSTM-based state-of-the-art algorithms.
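
To make the gating scheme concrete, below is a minimal PyTorch sketch of the idea as stated in the abstract: each stacked LSTM layer modulates its gates with a single order of the Derivative of States, computed as an l-fold finite difference of that layer's own cell state, so deeper layers model higher orders of DoS. The class names, the additive gate modulation, and the exact wiring are illustrative assumptions for exposition, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class DoSLSTMCell(nn.Module):
    """LSTM cell whose input, forget, and output gates are additionally
    modulated by a single order of the Derivative of States (DoS).
    Sketch only; the paper's exact gate formulation may differ."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.x2h = nn.Linear(input_size, 4 * hidden_size)   # i, f, o, g from the input
        self.h2h = nn.Linear(hidden_size, 4 * hidden_size)  # i, f, o, g from the hidden state
        self.d2h = nn.Linear(hidden_size, 3 * hidden_size)  # DoS term feeding i, f, o

    def forward(self, x, h, c, dos):
        i, f, o, g = (self.x2h(x) + self.h2h(h)).chunk(4, dim=-1)
        di, df, do = self.d2h(dos).chunk(3, dim=-1)
        i = torch.sigmoid(i + di)           # gates see the chosen order of state derivative
        f = torch.sigmoid(f + df)
        o = torch.sigmoid(o + do)
        c_next = f * c + i * torch.tanh(g)
        h_next = o * torch.tanh(c_next)
        return h_next, c_next


class D2RNN(nn.Module):
    """Stack of DoS-LSTM layers: layer l gates on the l-th order finite
    difference of its own cell state (order 0 is the state itself,
    order 1 is c_t - c_{t-1}, order 2 the difference of differences)."""

    def __init__(self, input_size, hidden_size, num_layers=3):
        super().__init__()
        sizes = [input_size] + [hidden_size] * num_layers
        self.hidden_size = hidden_size
        self.cells = nn.ModuleList(
            DoSLSTMCell(sizes[l], hidden_size) for l in range(num_layers)
        )

    def forward(self, x):                                   # x: (batch, time, input_size)
        B, T, _ = x.shape
        out = x
        for order, cell in enumerate(self.cells):
            h = c = x.new_zeros(B, self.hidden_size)
            # derivatives of orders 0..l at the previous time step
            d_prev = [x.new_zeros(B, self.hidden_size) for _ in range(order + 1)]
            seq = []
            for t in range(T):
                h, c = cell(out[:, t], h, c, d_prev[order])
                d_new = [c]                                 # d^0_t = c_t
                for k in range(order):                      # d^{k+1}_t = d^k_t - d^k_{t-1}
                    d_new.append(d_new[k] - d_prev[k])
                d_prev = d_new
                seq.append(h)
            out = torch.stack(seq, dim=1)
        return out                                          # top-layer features, (batch, time, hidden)
```

For recognition, one would typically run such a stack over per-frame features and pool the top-layer outputs over time before a classifier, e.g. `D2RNN(input_size=2048, hidden_size=512)` on a (batch, frames, 2048) tensor. The three-layer default mirrors the abstract's use of several orders of DoS; the layer count and modulation form here are assumptions.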
