Context-Free Transductions with Neural Stacks

Abstract

This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models. Due to the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, context-free language modelling, and cumulative XOR evaluation. Examining the behavior of our networks, we show that stack-augmented RNNs can discover intuitive stack-based strategies for solving our tasks. However, stack RNNs are more difficult to train than classical architectures such as LSTMs. Rather than employ stack-based strategies, more complex networks often find approximate solutions by using the stack as unstructured memory.
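To make the architecture concrete: stack RNNs of this kind typically build on the differentiable stack of Grefenstette et al. (2015), which replaces discrete push and pop operations with continuous strengths so the whole structure remains trainable by gradient descent. The NumPy sketch below illustrates that mechanism under this assumption; the NeuralStack class, its step/read interface, and the scalar push/pop strengths are illustrative choices, not the paper's implementation.

    import numpy as np

    class NeuralStack:
        # Differentiable stack with continuous push/pop strengths,
        # after Grefenstette et al. (2015). Illustrative sketch only;
        # not the authors' implementation.

        def __init__(self, dim):
            self.dim = dim
            self.values = []     # value vectors, bottom to top
            self.strengths = []  # matching scalar strengths in [0, 1]

        def step(self, v, push, pop):
            # Soft pop: consume `pop` units of strength from the top down.
            remaining = pop
            for i in reversed(range(len(self.strengths))):
                removed = min(self.strengths[i], remaining)
                self.strengths[i] -= removed
                remaining -= removed
            # Soft push: append the new value with strength `push`.
            self.values.append(np.asarray(v, dtype=float))
            self.strengths.append(float(push))

        def read(self):
            # Read the top 1.0 units of strength as a weighted sum of values.
            r, budget = np.zeros(self.dim), 1.0
            for v, s in zip(reversed(self.values), reversed(self.strengths)):
                w = min(s, budget)
                r, budget = r + w * v, budget - w
                if budget <= 0.0:
                    break
            return r

    # String reversal with one-hot symbols: push during the first half of
    # the string, pop during the second, and symbols re-emerge in reverse.
    stack = NeuralStack(dim=2)
    stack.step(np.array([1.0, 0.0]), push=1.0, pop=0.0)  # push "a"
    stack.step(np.array([0.0, 1.0]), push=1.0, pop=0.0)  # push "b"
    print(stack.read())                                  # [0. 1.]  ("b" on top)
    stack.step(np.zeros(2), push=0.0, pop=1.0)           # pop "b"
    print(stack.read())                                  # [1. 0.]  ("a" resurfaces)

The usage example shows the intuitive stack-based strategy the abstract alludes to for string reversal; the "unstructured memory" failure mode corresponds to networks emitting intermediate push/pop strengths that never commit to this discrete behavior.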
