Journal of Zhejiang University Science

A regeneratable dynamic differential evolution algorithm for neural networks with integer weights



Abstract

Neural networks with integer weights are better suited to embedded systems and hardware implementations than those with real-valued weights. However, many learning algorithms proposed for training networks with floating-point weights are inefficient or difficult to apply when the weights are restricted to integers. This paper presents a novel regeneratable dynamic differential evolution algorithm (RDDE) that trains integer-weight networks efficiently. Compared with the conventional differential evolution algorithm (DE), RDDE introduces three new strategies: (1) a regeneratable strategy that ensures further evolution when, after several iterations, all individuals become identical and can no longer evolve; in other words, it allows the population to escape from local minima; (2) a dynamic strategy that speeds up convergence and simplifies the algorithm by updating the population dynamically; (3) a local greedy strategy that improves local search ability when the population approaches the global optimal solution. Unlike gradient-based algorithms, RDDE does not require gradient information, which has been the main obstacle to training networks with integer weights. Experimental results show that RDDE trains integer-weight networks more efficiently.
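
The abstract describes the three strategies only at a high level, so the following is a minimal, illustrative Python sketch of a differential-evolution loop over integer weights in that spirit. The mutation factor F, crossover rate CR, weight range, loss function, and the exact form of the regeneration, dynamic-update, and greedy steps are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch of DE over integer weights with the three ideas named in
# the abstract (regeneration on population collapse, dynamic in-place updates,
# and a greedy local step around the best individual). All parameter values
# and strategy details are assumptions, not values from the paper.
import numpy as np

def rdde_sketch(loss, dim, pop_size=20, bounds=(-8, 8), F=0.8, CR=0.9,
                max_iter=1000, seed=None):
    rng = np.random.default_rng(seed)
    low, high = bounds
    # Integer-weight population: each row is one candidate weight vector.
    pop = rng.integers(low, high + 1, size=(pop_size, dim))
    fitness = np.array([loss(ind) for ind in pop])

    for _ in range(max_iter):
        # Regeneratable strategy (assumed form): if all individuals are identical,
        # keep the best one and re-sample the rest so evolution can continue.
        if np.all(pop == pop[0]):
            best = pop[np.argmin(fitness)].copy()
            pop = rng.integers(low, high + 1, size=(pop_size, dim))
            pop[0] = best
            fitness = np.array([loss(ind) for ind in pop])

        for i in range(pop_size):
            # Standard DE/rand/1 mutation and binomial crossover, rounded back
            # to integers so candidates stay in the integer weight space.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.rint(pop[a] + F * (pop[b] - pop[c])).astype(int)
            mutant = np.clip(mutant, low, high)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])

            # Dynamic strategy (assumed form): accepted trials replace their
            # parents immediately, so later mutations in the same generation
            # already use the updated population.
            f_trial = loss(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial

        # Local greedy strategy (assumed form): perturb the best individual by
        # -1/0/+1 per weight and keep the change if it lowers the loss.
        best_idx = np.argmin(fitness)
        candidate = np.clip(pop[best_idx] + rng.integers(-1, 2, size=dim),
                            low, high)
        f_cand = loss(candidate)
        if f_cand < fitness[best_idx]:
            pop[best_idx], fitness[best_idx] = candidate, f_cand

    best_idx = np.argmin(fitness)
    return pop[best_idx], fitness[best_idx]
```

The point mirrored here is that candidates stay in the integer weight space throughout and only loss evaluations are used, so no gradient of the loss with respect to the weights is ever required.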

