Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences

Training neural networks to encode symbols enables combinatorial generalization



Abstract

Combinatorial generalization, the ability to understand and produce novel combinations of already familiar elements, is considered to be a core capacity of the human mind and a major challenge to neural network models. A significant body of research suggests that conventional neural networks cannot solve this problem unless they are endowed with mechanisms specifically engineered for the purpose of representing symbols. In this paper, we introduce a novel way of representing symbolic structures in connectionist terms: the vectors approach to representing symbols (VARS), which allows training standard neural architectures to encode symbolic knowledge explicitly at their output layers. In two simulations, we show that neural networks not only can learn to produce VARS representations, but in doing so they achieve combinatorial generalization in their symbolic and non-symbolic output. This adds to other recent work that has shown improved combinatorial generalization under some training conditions, and raises the question of whether specific mechanisms or training routines are needed to support symbolic processing.
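The core idea described in the abstract, training a standard network to emit explicit vector codes for symbols at its output layer, and then testing it on a never-seen combination of familiar elements, can be illustrated with a toy sketch. The code below is not the paper's VARS implementation; it is a minimal, hypothetical analogue in which each symbol is assigned a fixed random vector (an assumed "codebook"), a linear readout is trained to map one-hot (color, shape) inputs to the concatenation of the two symbol vectors, and one combination is held out to probe combinatorial generalization.

```python
import numpy as np

rng = np.random.default_rng(0)
colors = ["red", "green", "blue"]
shapes = ["circle", "square", "star"]
D = 4  # dimensionality of each symbol vector (assumed, for illustration)

# Fixed random vector for every symbol: a stand-in for explicit symbol codes
# at the output layer (NOT the paper's actual VARS scheme).
codebook = {s: rng.standard_normal(D) for s in colors + shapes}

def encode_input(c, s):
    """One-hot color concatenated with one-hot shape."""
    x = np.zeros(len(colors) + len(shapes))
    x[colors.index(c)] = 1.0
    x[len(colors) + shapes.index(s)] = 1.0
    return x

def target(c, s):
    """Output target: the two symbol vectors, concatenated."""
    return np.concatenate([codebook[c], codebook[s]])

pairs = [(c, s) for c in colors for s in shapes]
held_out = ("blue", "star")  # familiar elements, novel combination
train = [p for p in pairs if p != held_out]

X = np.stack([encode_input(*p) for p in train])  # (8, 6)
Y = np.stack([target(*p) for p in train])        # (8, 8)

# Train a linear readout by gradient descent on mean squared error.
W = np.zeros((X.shape[1], 2 * D))
for _ in range(2000):
    grad = X.T @ (X @ W - Y) / len(train)
    W -= 0.5 * grad

# Predict the never-seen combination and decode each half of the output
# by nearest codebook vector.
pred = encode_input(*held_out) @ W

def nearest(v, symbols):
    return min(symbols, key=lambda s: np.linalg.norm(codebook[s] - v))

decoded = (nearest(pred[:D], colors), nearest(pred[D:], shapes))
print(decoded)
```

Because the target function is linear and separable over the two input slots, the held-out input lies in the span of the training inputs, so the trained readout recovers the correct symbol vectors for `("blue", "star")` despite never having seen that pair. Whether richer, nonlinear architectures generalize this way is exactly the question the paper's simulations address.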
