Soft Computing - A Fusion of Foundations, Methodologies and Applications

Covariance matrix self-adaptation evolution strategies and other metaheuristic techniques for neural adaptive learning


Abstract

A covariance matrix self-adaptation evolution strategy (CMSA-ES) was compared with several metaheuristic techniques for multilayer perceptron (MLP)-based function approximation and classification. Function approximation was based on simulations of several 2D functions, and classification analysis was based on nine cancer DNA microarray data sets. Connection weight learning by MLPs was carried out using genetic algorithms (GA–MLP), covariance matrix self-adaptation evolution strategies (CMSA-ES–MLP), back-propagation gradient-based learning (MLP), particle swarm optimization (PSO–MLP), and ant colony optimization (ACO–MLP). During function approximation runs, the input-side activation functions evaluated included linear, logistic, tanh, Hermite, Laguerre, exponential, and radial basis functions, while the output-side function was always linear. For classification, the input-side activation function was always logistic, while the output-side function was always regularized softmax. Self-organizing maps (SOM) and unsupervised neural gas (NG) were used to reduce the dimensionality of the original gene expression input features used in classification. Results indicate that for function approximation, using Hermite polynomials as activation functions at hidden nodes with CMSA-ES–MLP connection weight learning yielded the greatest fitness levels. On average, the most elite chromosomes were observed for MLP (MSE = 0.4977), CMSA-ES–MLP (0.6484), PSO–MLP (0.7472), ACO–MLP (1.3471), and GA–MLP (1.4845). For classification analysis, the overall average performance of the classifiers was 92.64% (CMSA-ES–MLP), 92.22% (PSO–MLP), 91.30% (ACO–MLP), 89.36% (MLP), and 60.72% (GA–MLP). We have shown that a reliable approach to function approximation can be achieved through application of MLP connection weight learning when the assumed function is unknown.
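The Hermite hidden-node activations mentioned above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names, the physicists' Hermite recurrence, and the choice of summing the polynomial orders at each hidden node are all illustrative assumptions.

```python
import numpy as np

def hermite_basis(x, degree):
    """Physicists' Hermite polynomials H_0..H_degree via the recurrence
    H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)."""
    H = [np.ones_like(x), 2.0 * x]
    for n in range(1, degree):
        H.append(2.0 * x * H[n] - 2.0 * n * H[n - 1])
    return np.stack(H[: degree + 1])

def hermite_hidden_layer(X, W, degree=3):
    """One illustrative choice of hidden-node output: apply the Hermite
    basis to each node's net input and sum over polynomial orders."""
    net = X @ W                                # net input at each hidden node
    return hermite_basis(net, degree).sum(axis=0)
```

For example, `hermite_basis(np.array([1.0]), 2)` evaluates H_0(1) = 1, H_1(1) = 2, and H_2(1) = 4·1² − 2 = 2.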
In this scenario, the MLP architecture itself defines the equation used for solving the unknown parameters relating input and output target values. A major drawback of implementing CMSA-ES in an MLP is that when the number of MLP weights is large, the O(N^3) Cholesky factorization becomes a performance bottleneck. As an alternative, feature reduction using SOM and NG can greatly enhance the performance of CMSA-ES–MLP by reducing N. Future research into speeding up Cholesky factorization for CMSA-ES will help overcome the time complexity problems associated with a large number of connection weights.
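The O(N^3) cost arises in the sampling step of a covariance matrix adaptation strategy: correlated mutation vectors are drawn from N(mean, σ²C) via a Cholesky factor of the covariance matrix C. A minimal numpy sketch of that step follows; the function name and signature are assumptions for illustration, not the paper's code.

```python
import numpy as np

def cmsa_es_sample(mean, sigma, C, n_offspring, rng):
    """Draw n_offspring mutation vectors from N(mean, sigma^2 * C).

    The Cholesky factorization C = L L^T costs O(N^3) in the number of
    parameters N -- the bottleneck noted above when N is the number of
    MLP connection weights.
    """
    L = np.linalg.cholesky(C)                  # O(N^3) factorization
    z = rng.standard_normal((n_offspring, mean.size))
    return mean + sigma * (z @ L.T)            # correlated mutation steps
```

Since each generation refactorizes C, shrinking N via SOM/NG feature reduction (as the abstract suggests) directly shrinks this cubic term.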
