Conference on substance identification analytics

Relabeling exchange method (REM) for learning in neural networks



Abstract

The supervised training of neural networks requires the use of output labels, which are usually assigned arbitrarily. In this paper it is shown that there is a significant difference in the rms learning error when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms for solving the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. We therefore introduce a new heuristic algorithm, the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is general and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the current interpretation of dyslexia as an encoding problem.
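The abstract only names REM without giving its inner loop. For intuition, below is a minimal Python sketch of a pairwise label-exchange heuristic of the general kind described: output label codewords are swapped between classes whenever the swap lowers a proxy cost. The cost function, the use of per-class mean inputs, and the names assignment_cost and relabel_by_exchange are illustrative assumptions, not the authors' implementation.

import numpy as np

def assignment_cost(class_means, codewords, assignment):
    # Proxy cost (assumed for illustration): total squared distance between
    # each class's mean input and the codeword currently assigned to it.
    return sum(
        np.sum((class_means[c] - codewords[assignment[c]]) ** 2)
        for c in range(len(class_means))
    )

def relabel_by_exchange(class_means, codewords, max_passes=10):
    # Start from the arbitrary identity assignment and repeatedly swap the
    # codewords of two classes whenever the swap reduces the cost.
    n = len(class_means)
    assignment = list(range(n))          # class i -> codeword assignment[i]
    for _ in range(max_passes):
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                trial = assignment[:]
                trial[i], trial[j] = trial[j], trial[i]
                if (assignment_cost(class_means, codewords, trial)
                        < assignment_cost(class_means, codewords, assignment)):
                    assignment = trial
                    improved = True
        if not improved:                 # local optimum reached
            break
    return assignment

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    class_means = rng.normal(size=(4, 3))   # toy per-class mean inputs
    codewords = np.eye(4)[:, :3]            # toy target label vectors
    print(relabel_by_exchange(class_means, codewords))

Compared with simulated annealing or a genetic algorithm, a greedy exchange of this sort evaluates only pairwise swaps per pass, which is what makes it computationally attractive.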


