Neural Computation

Architectural Bias in Recurrent Neural Networks: Fractal Analysis


Abstract

We have recently shown that when initialized with "small" weights, recurrent neural networks (RNNs) with standard sigmoid-type activation functions are inherently biased toward Markov models; even prior to any training, RNN dynamics can be readily used to extract finite-memory machines (Hammer & Tino, 2002; Tino, Cernansky, & Benuskova, 2002a, 2002b). Following Christiansen and Chater (1999), we refer to this phenomenon as the architectural bias of RNNs. In this article, we extend our work on the architectural bias in RNNs by performing a rigorous fractal analysis of recurrent activation patterns. We assume the network is driven by sequences obtained by traversing an underlying finite-state transition diagram, a scenario that has been frequently considered in the past, for example, when studying RNN-based learning and implementation of regular grammars and finite-state transducers. We obtain lower and upper bounds on various types of fractal dimensions, such as the box-counting and Hausdorff dimensions. It turns out that not only can the recurrent activations inside RNNs with small initial weights be exploited to build Markovian predictive models, but the activations also form fractal clusters whose dimension can be bounded by the scaled entropy of the underlying driving source. The scaling factors are fixed and are given by the RNN parameters.
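The phenomenon the abstract describes can be illustrated numerically: an untrained RNN with small weights, driven by symbols from a finite-state source, produces a contractive state trajectory whose attractor has a measurable box-counting dimension. The sketch below is purely illustrative and makes several assumptions not taken from the paper: a hypothetical two-state Markov driving source, a weight scale of 0.5, a 2-dimensional hidden state (so boxes can be counted in the plane), and an arbitrary range of box sizes for the log-log fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-state driving source: a 2-state Markov chain over
# two symbols (an assumption for illustration, not the paper's source).
n_symbols = 2
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])  # state-transition probabilities

def sample_sequence(length):
    s, seq = 0, []
    for _ in range(length):
        s = rng.choice(n_symbols, p=P[s])
        seq.append(s)
    return seq

# Untrained RNN with "small" weights; tanh is a standard sigmoid-type
# activation, and the 0.5 weight scale keeps the dynamics contractive.
hidden = 2
W = 0.5 * rng.standard_normal((hidden, hidden))     # recurrent weights
V = 0.5 * rng.standard_normal((hidden, n_symbols))  # input weights

def run(seq):
    h = np.zeros(hidden)
    states = []
    for s in seq:
        x = np.eye(n_symbols)[s]    # one-hot encoding of the symbol
        h = np.tanh(W @ h + V @ x)  # no training: weights stay fixed
        states.append(h.copy())
    return np.array(states)

states = run(sample_sequence(20000))

def box_count(points, eps):
    # Number of eps-sized boxes needed to cover the activation set.
    return len(set(map(tuple, np.floor(points / eps).astype(int))))

# Box-counting dimension: slope of log N(eps) against log(1/eps).
eps_values = np.array([0.1, 0.05, 0.025, 0.0125])
counts = [box_count(states, e) for e in eps_values]
dim = np.polyfit(np.log(1 / eps_values), np.log(counts), 1)[0]
print(f"box-counting dimension estimate: {dim:.2f}")
```

Because tanh maps into (-1, 1) and the small weights make each symbol act as a contraction, the trajectory settles onto a clustered set rather than filling the plane; the fitted slope is a crude finite-sample estimate of the dimension that the paper bounds analytically via the source entropy.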
