
Fast optimization of PNN based on center neighbor and KLT

Abstract

Probabilistic Neural Networks (PNN) learn quickly from examples in a single pass and asymptotically achieve the Bayes-optimal decision boundaries. Their major disadvantage is that they require one node, or neuron, for each training sample. Various clustering techniques have been proposed to reduce this requirement to one node per cluster center. A new fast optimization of PNN is investigated here: the centers of the still-unrecognized samples of each class are computed iteratively, and the nearest neighbors of those centers are added to the pattern layer. To construct the classification model quickly, a weighting and incremental technique is introduced to improve the learning speed. To shrink the PNN structure further, the Karhunen-Loeve transform (KLT) is adopted to compress the feature dimension. The proposed approach therefore reduces redundancy both in samples, via nearest neighbors, and in features, via the KL transformation. Experiments on UCI datasets show an appropriate tradeoff between training time and generalization ability.
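Below is a minimal NumPy sketch of the two reduction ideas the abstract combines: a Karhunen-Loeve transform to compress the feature dimension, and an iterative center-neighbor rule to grow the PNN pattern layer. The Gaussian pattern units, the shared smoothing parameter sigma, the seeding and stopping rules, and the names klt, pnn_predict, and center_neighbor_pnn are illustrative assumptions, and the paper's weighting and incremental speed-up is omitted; this is not the authors' exact formulation.

import numpy as np

def klt(X, n_components):
    # Karhunen-Loeve transform (assumed here as a PCA-style projection):
    # project the centered data onto the top eigenvectors of its
    # covariance matrix to compress the feature dimension.
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:n_components]  # keep the largest ones
    W = eigvecs[:, top]
    return Xc @ W, mean, W

def pnn_predict(patterns, pat_labels, X, sigma, classes):
    # Standard PNN decision rule: sum a Gaussian kernel over each class's
    # pattern units and pick the class with the largest summed activation.
    d2 = ((X[:, None, :] - patterns[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    scores = np.stack([k[:, pat_labels == c].sum(axis=1) for c in classes],
                      axis=1)
    return classes[np.argmax(scores, axis=1)]

def center_neighbor_pnn(X, y, sigma=1.0, max_iter=50):
    # Grow the pattern layer iteratively: for each class, take the center
    # (mean) of its still-misclassified samples and add the misclassified
    # sample nearest to that center as a new pattern unit.
    classes = np.unique(y)
    pat_idx = []
    for c in classes:  # seed: the sample nearest each full-class center
        idx = np.where(y == c)[0]
        center = X[idx].mean(axis=0)
        pat_idx.append(idx[np.argmin(((X[idx] - center) ** 2).sum(axis=1))])
    for _ in range(max_iter):
        pred = pnn_predict(X[pat_idx], y[pat_idx], X, sigma, classes)
        wrong = pred != y
        if not wrong.any():
            break
        for c in classes:
            pool = np.array([i for i in np.where(wrong & (y == c))[0]
                             if i not in pat_idx])
            if pool.size == 0:
                continue
            center = X[pool].mean(axis=0)
            pat_idx.append(pool[np.argmin(((X[pool] - center) ** 2)
                                          .sum(axis=1))])
    return X[pat_idx], y[pat_idx]

In use, one would compress the training features first and then grow the pattern layer on the compressed data, e.g. Z, mean, W = klt(X_train, 5) followed by center_neighbor_pnn(Z, y_train), classifying new points with pnn_predict after projecting them with the same mean and W.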