
On the Performance Analysis of Resource Allocating Network Trained with Significant Patterns



Abstract

Radial Basis Function (RBF) neural networks have been receiving wider attention in the neural network research community, after the Back Propagation (BP) trained Multilayer Perceptron (MLP) feed-forward neural networks. The main reasons are their simple structure and ease of training. Several variants of RBF learning algorithms are available for solving classification, function approximation and regression problems. In any neural network system, proper identification of the architecture and initialization of its parameters is critical to achieving good performance. There have been many proposals for automatically developing the architecture during learning, to avoid the frustration caused by trial-and-error schemes for fixing the so-called optimal architecture. However, their focus has been confined to a single objective, i.e., reducing the number of RBF units. Among the several variants of constructive learning networks, Platt's Resource Allocating Network (RAN) remains one of the popular incremental learning networks; it has been tried on several problem types, and research is still underway to fine-tune it and maximize its performance. RAN has several limitations: it is sensitive to outliers and builds larger networks, resulting in memorization and consequently poor generalization. The best part of this algorithm is its fast convergence, so there is substantial scope to improve its performance. In this paper, we propose two new ideas to improve the performance of RAN. First, we calculate the medoids from the samples in the respective pattern classes. Second, we pick a small percentage of significant patterns based on the farthest neighbor concept. The medoids and the significant patterns are collectively used to train the RAN, while all samples in the input space are used to test the generalization performance of the network. Benchmark datasets have been used to evaluate the proposed techniques. Results reveal a significant enhancement in the performance of the RAN compared to the traditional RAN.
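The abstract does not spell out the exact selection procedure, but the following Python sketch illustrates one plausible reading of the proposed preprocessing step: a class-wise medoid plus a small fraction of the samples farthest from that medoid form the reduced training set handed to the RAN. The function names, the Euclidean distance metric, and the 5% fraction are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def class_medoid(X):
    """Medoid of a class: the sample minimizing the total distance
    to all other samples of the same class (assumed Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    return X[np.argmin(d.sum(axis=1))]

def significant_patterns(X, medoid, fraction=0.05):
    """Pick a small fraction of samples lying farthest from the class
    medoid -- one possible reading of the 'farthest neighbor' idea."""
    d = np.linalg.norm(X - medoid, axis=1)
    k = max(1, int(np.ceil(fraction * len(X))))
    return X[np.argsort(d)[-k:]]

def build_training_set(X, y, fraction=0.05):
    """Collect the medoid plus significant patterns of every class;
    the resulting reduced set would be used to train the RAN."""
    Xs, ys = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        m = class_medoid(Xc)
        S = significant_patterns(Xc, m, fraction)
        Xs.append(np.vstack([m[None, :], S]))
        ys.append(np.full(len(S) + 1, c))
    return np.vstack(Xs), np.concatenate(ys)
```

In this sketch the full dataset is still kept aside for testing, matching the abstract's statement that all samples in the input space are used to evaluate generalization.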
