
LEARNING LONGER-TERM DEPENDENCIES IN NEURAL NETWORK USING AUXILIARY LOSSES

Abstract

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for structuring and training a recurrent neural network. The specification describes a technique that improves the ability of recurrent neural networks to capture long-term dependencies by adding an unsupervised auxiliary loss, at one or more anchor points, to the original training objective. This auxiliary loss forces the network either to reconstruct previous events or to predict upcoming events in the sequence, making truncated backpropagation through time feasible for long sequences and also improving full backpropagation through time.
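The abstract describes attaching an unsupervised auxiliary loss at an anchor point so that the hidden state there must reconstruct the inputs that preceded it (or predict the ones that follow), in addition to serving the main objective. The code below is a minimal sketch of that idea in PyTorch, not the patented method itself: the module name AuxLossRNN, the single random anchor, the teacher-forced reconstruction decoder, the segment length aux_len, and the auxiliary weight of 0.5 are all illustrative assumptions.

# Minimal sketch (PyTorch) of an unsupervised auxiliary reconstruction loss at a
# single anchor point. Names and hyperparameters are illustrative assumptions,
# not details taken from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxLossRNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_classes, aux_len=8):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)
        # Auxiliary decoder: reconstructs the aux_len inputs preceding the
        # anchor point, conditioned on the hidden state at the anchor.
        self.aux_decoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.aux_proj = nn.Linear(hidden_dim, input_dim)
        self.aux_len = aux_len

    def forward(self, x, labels):
        # x: (batch, seq_len, input_dim); seq_len must exceed aux_len.
        states, _ = self.rnn(x)
        logits = self.classifier(states[:, -1])        # main task: classify from the last state
        main_loss = F.cross_entropy(logits, labels)

        # Choose one anchor position with at least aux_len preceding steps.
        anchor = torch.randint(self.aux_len, x.size(1), (1,)).item()
        segment = x[:, anchor - self.aux_len:anchor]   # inputs the anchor state must explain

        # Teacher-forced reconstruction: predict each step of the segment from
        # the previous one, starting from the hidden state at the anchor.
        h0 = states[:, anchor].unsqueeze(0).contiguous()
        c0 = torch.zeros_like(h0)
        dec_states, _ = self.aux_decoder(segment[:, :-1], (h0, c0))
        recon = self.aux_proj(dec_states)
        aux_loss = F.mse_loss(recon, segment[:, 1:])

        return main_loss + 0.5 * aux_loss              # auxiliary weight is illustrative

During training the combined loss returned by forward is backpropagated as usual. With truncated backpropagation through time, the auxiliary term delivers a gradient signal at the anchor that reaches time steps the truncated main loss alone would never touch, which is how the reconstruction (or next-event prediction) objective helps the network retain information over longer spans.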
