Neural Processing Letters
Optimizing Extreme Learning Machine via Generalized Hebbian Learning and Intrinsic Plasticity Learning
Abstract

Traditional extreme learning machines (ELMs) use random weights between the input layer and the hidden layer. This random feature mapping yields a non-discriminative feature space and unstable classification accuracy, which greatly limits the performance of ELM networks. Therefore, to obtain satisfactory input weights, two biologically inspired, unsupervised learning methods are introduced to optimize traditional ELM networks: the generalized Hebbian algorithm (GHA) and intrinsic plasticity learning (IPL). The GHA extracts the principal components of input data of arbitrary size, while IPL tunes the probability density of each neuron's output toward a desired distribution, such as an exponential or Weber distribution, thereby maximizing the network's information transmission. By incorporating the GHA and IPL, the optimized ELM network generates a discriminative feature space and preserves far more of the input data's characteristics, accordingly achieving better task performance. Building on these two unsupervised methods, a simple yet effective hierarchical feature-mapping extreme learning machine (HFMELM) is further proposed. With almost no information loss in the layer-wise feature-mapping process, the HFMELM is able to learn high-level representations of the input data. To evaluate the effectiveness of the proposed methods, extensive experiments on several datasets are presented; the results show that the proposed methods significantly outperform traditional ELM networks.
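As a rough illustration of the two unsupervised rules named in the abstract, the sketch below implements one online step of Sanger's generalized Hebbian algorithm (whose rows converge to the leading principal components of the input) and a Triesch-style intrinsic-plasticity update for a logistic neuron, which nudges the output distribution toward an exponential with a chosen mean. The learning rates, the target mean `mu`, and the logistic activation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gha_step(W, x, lr):
    """One online update of Sanger's generalized Hebbian algorithm.

    W : (n_components, n_features) weight matrix; its rows converge to
        the leading principal components of the inputs x.
    """
    y = W @ x
    # np.tril(outer(y, y)) implements the sum over k <= i in Sanger's rule:
    # dW_ij = lr * (y_i x_j - sum_{k<=i} y_i y_k W_kj)
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

def ip_step(a, b, x, lr, mu):
    """Triesch-style intrinsic plasticity update (assumed form) for a
    logistic neuron y = 1 / (1 + exp(-(a*x + b))), pushing the output
    distribution toward an exponential with mean mu."""
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    db = lr * (1.0 - (2.0 + 1.0 / mu) * y + (y ** 2) / mu)
    da = lr / a + db * x
    return a + da, b + db

# Usage sketch: GHA on strongly anisotropic 2-D data recovers the
# dominant direction; IP drives the mean output toward mu.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((2, 2))
for _ in range(10000):
    x = np.array([3.0 * rng.standard_normal(), 0.3 * rng.standard_normal()])
    W = gha_step(W, x, lr=0.002)

a, b = 1.0, 0.0
for _ in range(20000):
    a, b = ip_step(a, b, rng.standard_normal(), lr=0.01, mu=0.2)
```

In the optimized-ELM setting described above, updates like these would replace the purely random input weights: GHA shapes the input-to-hidden weights, while IP adjusts each hidden neuron's activation slope and bias.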
