
Context-Free Transductions with Neural Stacks



Abstract

This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models. Due to the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, context-free language modelling, and cumulative XOR evaluation. Examining the behavior of our networks, we show that stack-augmented RNNs can discover intuitive stack-based strategies for solving our tasks. However, stack RNNs are more difficult to train than classical architectures such as LSTMs. Rather than employ stack-based strategies, more complex networks often find approximate solutions by using the stack as unstructured memory.
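As an illustration of the transduction tasks named in the abstract, here is a minimal sketch in plain Python (not the authors' code; the input format is assumed) of the target behavior a trained stack RNN would be expected to reproduce on two of the tasks: string reversal via an explicit stack, and cumulative XOR evaluation.

```python
def reverse_with_stack(s: str) -> str:
    """String reversal: push every symbol, then pop to emit the reverse.

    This mirrors the intuitive pushdown strategy the paper looks for
    in the learned networks.
    """
    stack = []
    for ch in s:
        stack.append(ch)
    return "".join(stack.pop() for _ in range(len(stack)))


def cumulative_xor(bits: list[int]) -> list[int]:
    """Cumulative XOR: the output at position i is the XOR of bits[0..i]."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out
```

For example, `reverse_with_stack("abc")` returns `"cba"`, and `cumulative_xor([1, 0, 1, 1])` returns `[1, 1, 0, 1]`.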

Bibliographic Information

  • Source
  • Venue: Brussels (BE)
  • Author Affiliations

    Department of Linguistics, Yale University; Department of Computer Science, Yale University


  • Conference Organizer
  • Format: PDF
  • Language: English (eng)
  • Chinese Library Classification
  • Keywords

