Global Journal of Technology and Optimization

Influence of Principal Component Analysis as a Data Conditioning Approach for Training Multilayer Feedforward Neural Networks with Exact Form of Levenberg-Marquardt Algorithm



Abstract

Artificial Neural Networks (ANNs) have generally been observed to learn at a higher rate of convergence, resulting in improved training performance, when the input variables are preprocessed before being used to train the network. The foremost objectives of data preprocessing include size reduction of the input space, a smoother input-output relationship, data normalization, noise reduction, and feature extraction. The most commonly used technique for input space reduction is Principal Component Analysis (PCA), while two of the most commonly used data normalization approaches are min-max normalization (rescaling) and z-score normalization (standardization). However, selecting the most appropriate preprocessing method for a given dataset is not a trivial task, especially if the dataset contains an unusually large number of training patterns. This study presents a first attempt at combining PCA with each of the two aforementioned normalization approaches to analyze network performance under the Levenberg-Marquardt (LM) training algorithm, using exact formulations of both the gradient vector and the Hessian matrix. The network weights are initialized using a linear least squares method. The training procedure is conducted for each of the proposed modifications of the LM algorithm on four different types of datasets, and the training performance, in terms of average convergence rate and a proposed performance metric, is compared with the Neural Network Toolbox in MATLAB (R2017a).
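The two preprocessing pipelines the abstract compares (min-max rescaling followed by PCA, and z-score standardization followed by PCA) can be sketched as below. This is a minimal NumPy illustration of the standard techniques, not code from the paper; all function names and the choice of three retained components are illustrative assumptions.

```python
import numpy as np

def zscore(X):
    # z-score normalization (standardization): zero mean, unit variance per feature
    return (X - X.mean(axis=0)) / X.std(axis=0)

def minmax(X):
    # min-max normalization (rescaling): map each feature to [0, 1]
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn)

def pca(X, k):
    # project the (already normalized) data onto its top-k principal
    # components; rows are patterns, columns are features
    Xc = X - X.mean(axis=0)
    # right singular vectors of the centered data are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # hypothetical dataset: 200 patterns, 8 inputs

# the two data-conditioning pipelines compared in the study:
X_z = pca(zscore(X), k=3)              # standardization + PCA
X_m = pca(minmax(X), k=3)              # rescaling + PCA
```

Either reduced input matrix would then be fed to the LM-trained feedforward network in place of the raw inputs, shrinking the input space from eight variables to three.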
