Systems and Computers in Japan

Introduction of Linear Constraints on the Weight Representation of Multilayer Networks for Generalization and Application to Character Recognition


Abstract

In applying layered neural networks to practical problems, high generalization power is required. This paper discusses a method for improving the generalization power of neural networks. Prior knowledge of the object to be learned is assumed to include the fact that its output function remains invariant over a certain range of input-pattern variation. An attempt is made to improve generalization by reflecting this invariance in the weights of the network. For the case in which the input-pattern variation can be represented by a linear transformation, it is shown that a sufficient condition for the network to possess such invariance is that a linear dependency constraint be imposed on the weight representation. A learning procedure is proposed in which this weight constraint is introduced into the evaluation function as an additional term. The proposed method can be regarded as a generalization that includes deletion learning methods, such as the weight decay method and the structural learning method, as special cases. The relation between generalization power and the VC dimension has been discussed in the literature; the improvement achieved by introducing a linear constraint on the weight representation can be evaluated in terms of the resulting reduction of the VC dimension. Lastly, results are presented for an experiment in which the proposed method is applied to a character recognition problem.
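The core idea can be illustrated with a minimal sketch. Assuming a single linear layer with weight matrix `W`, invariance of the output under the input transformation `x -> T x` holds when `W T = W`, i.e., `W (T - I) = 0`, a linear constraint on the weights. Following the paper's approach of adding the constraint to the evaluation function as a penalty term, a hypothetical loss might look like the following (the function and parameter names `constrained_loss`, `T`, and `lam` are illustrative, not from the paper):

```python
import numpy as np

def constrained_loss(W, X, Y, T, lam):
    """Squared error plus a penalty enforcing the linear weight
    constraint W @ (T - I) = 0, a sufficient condition for the
    output to be invariant under the input transform x -> T @ x.

    W   : (n_out, n_in) weight matrix
    X   : (n_samples, n_in) inputs
    Y   : (n_samples, n_out) targets
    T   : (n_in, n_in) linear input transformation
    lam : penalty strength
    """
    pred = X @ W.T                          # linear layer (bias omitted)
    err = np.mean((pred - Y) ** 2)          # ordinary evaluation term
    C = T - np.eye(T.shape[0])              # constraint matrix (T - I)
    penalty = np.sum((W @ C) ** 2)          # vanishes exactly when W T = W
    return err + lam * penalty
```

Note that taking `T = 0` makes the penalty `sum(W**2)`, i.e., ordinary weight decay, which is consistent with the abstract's claim that deletion learning methods arise as special cases.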
