
Modular neural networks applied to pattern recognition tasks


Abstract

Pattern recognition has become an accessible tool in the development of advanced adaptive products. The need for such products is not diminishing; on the contrary, demand for systems that are ever more aware of their environment is growing constantly. Feed-forward neural networks learn the patterns in their training data without the relationships present in the data having to be discovered by hand. However, the problem of estimating the required size of the neural network remains unsolved. If we choose a network that is too small for a given task, it is unable to "comprehend" the intricacies of the data. If, on the other hand, we choose a network that is too large, there are too many parameters to tune, we may run into the "curse of dimensionality", or, worse still, the training algorithm can easily become trapped in a local minimum of the error surface. We therefore investigate ways of finding, for a given training set, the 'Goldilocks' size of a feed-forward neural network, one that is "just right" in some sense. Furthermore, we apply a paradigm as old as the Roman Empire and employed on a wide scale in computer programming, the "divide et impera" (divide-and-conquer) approach: a given dataset is divided into multiple sub-datasets, the problem is solved for each sub-dataset, and the results of all the sub-problems are fused to form the result for the initial problem as a whole. To this end we investigate modular neural networks and their performance.
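The divide-and-conquer scheme the abstract describes can be sketched in miniature as follows. The region split, the module size, and the routing-based fusion rule here are illustrative assumptions for a toy XOR-like task, not the thesis's actual method; each "module" is a tiny one-hidden-layer feed-forward network trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000):
    """Train a tiny one-hidden-layer sigmoid network; return a predict function."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-np.clip(z, -30, 30)))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                 # forward pass, hidden layer
        p = sig(h @ W2 + b2).ravel()         # forward pass, output
        dp = ((p - y) / n)[:, None]          # grad of cross-entropy at the output
        dW2 = h.T @ dp; db2 = dp.sum(0)
        dh = dp @ W2.T * h * (1 - h)         # backprop through the hidden layer
        dW1 = X.T @ dh; db1 = dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return lambda Z: sig(sig(Z @ W1 + b1) @ W2 + b2).ravel()

# "Divide": partition the input space into two regions, one module per region.
# Splitting on the sign of the first feature is an assumed, illustrative rule.
def region(X):
    return X[:, 0] < 0

# Toy XOR-like dataset: label is the XOR of the two coordinate signs.
X = rng.uniform(-1, 1, (400, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)

mask = region(X)
modules = [train_mlp(X[mask], y[mask]), train_mlp(X[~mask], y[~mask])]

# "Impera"/fuse: route each sample to the module that owns its region and
# combine the per-module answers into the prediction for the whole problem.
def predict(X):
    m = region(X)
    out = np.empty(len(X))
    out[m] = modules[0](X[m])
    out[~m] = modules[1](X[~m])
    return (out > 0.5).astype(float)

acc = (predict(X) == y).mean()
```

The split deliberately makes each sub-problem easier than the whole: restricted to one half-plane, the XOR labelling becomes linearly separable, so each small module fits its region cheaply, which is the intuition behind sizing modules to sub-datasets rather than one large network to the full dataset.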

Bibliographic details

  • Author: Gherman Bogdan George
  • Year: 2016
  • Format: PDF
  • Language: en

