
Analysis on the Convergence of Dyadic Wavelet Based Neural Network with Varying Learning Rate and Resolution for Function Learning


Abstract

This paper presents an analysis of the generalisation of a dyadic wavelet based neural network trained on samples drawn uniformly from the input space. The main focus is to quantify the influence of the learning rate and the resolution so as to ensure acceptable generalisation accuracy in function learning simulations. The proposed network is based on orthonormal basis functions and is trained with a stochastic gradient algorithm. Simulations of the developed dyadic wavelet based architecture and its learning algorithm confirm the effectiveness of the scaling function characteristics. Experimental results reveal that training and tuning the various simulation parameters of the network and its properties have a significant influence on the generalisation and convergence ability of the Dyadic Wavelet Neural Network (DWNN).
