The TACOMA learning architecture for reflective growing of neural networks

Abstract

One of the important problems to be solved in neural network applications is finding a network structure suited to the given task. To reduce the engineering effort of architecture design, a data-driven algorithm that constructs the network structure during learning is desirable. Several approaches to structure adaptation exist, based on evolutionary algorithms, growth algorithms, and others. To solve large problems successfully, it is necessary to divide a problem into subproblems and to have experts solve them separately, a fundamental principle of nature. Different approaches implement this principle in artificial neural networks, but those algorithms yield fixed network structures. The authors propose a learning architecture for growing complex artificial neural networks that combines both sides of the coin: structure adaptation and task decomposition. The growing process is controlled by self-observation, or reflection. The algorithm generates a feedforward network bottom-up by cyclically inserting cascaded hidden layers. The inputs of a hidden-layer unit are locally restricted with respect to the input space by a new kind of activation function that combines the local characteristics of radial basis function units with sigmoid units. In contrast to the cascade-correlation learning architecture, the authors introduce different correlation measures to train network units pursuing different goals. Task decomposition between subnetworks is achieved by maximizing the anticorrelation between the hidden-layer units' outputs and by a connection routing algorithm that connects only cooperative units of different layers. These features give the TACOMA (TAsk decomposition, COrrelation Measures and local Attention neurons) learning architecture its name. Self-observation is realized by mapping the errors and the network structure back to the input space, so that one can infer from errors to structure and vice versa.
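The abstract names two central ingredients without giving their functional forms: a locally restricted ("local attention") activation that combines RBF locality with a sigmoid unit, and correlation measures, including anticorrelation between hidden-unit outputs, that drive task decomposition. The following Python sketch is a hypothetical illustration of both ideas, not the paper's implementation; the Gaussian window with centre `c` and width `sigma`, and the averaged pairwise anticorrelation score, are assumptions about one plausible realization.

```python
import numpy as np

def local_attention_activation(x, w, b, c, sigma):
    """Hypothetical local attention unit: a sigmoid response gated by a
    Gaussian (RBF-style) window, so the unit responds only near its
    centre c in input space. The paper's exact form may differ."""
    sigmoid = 1.0 / (1.0 + np.exp(-(x @ w + b)))    # global sigmoid part
    window = np.exp(-np.sum((x - c) ** 2, axis=-1) / (2.0 * sigma ** 2))
    return sigmoid * window                          # locally restricted output

def anticorrelation_score(candidate_out, hidden_outs):
    """Average negative Pearson correlation between a candidate unit's
    output and the outputs of already installed hidden units; maximizing
    it pushes the candidate toward a different subtask."""
    c_ = candidate_out - candidate_out.mean()
    total = 0.0
    for h in hidden_outs:
        h_ = h - h.mean()
        denom = np.sqrt((c_ ** 2).sum() * (h_ ** 2).sum()) + 1e-12
        total += -(c_ * h_).sum() / denom            # anticorrelation term
    return total / max(len(hidden_outs), 1)
```

In a cascade-correlation-style growth loop, candidate units would be trained to maximize such a score, possibly combined with an error-correlation term, before the best candidate is installed as a new cascaded hidden layer and the routing algorithm decides which existing units it connects to.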