IEEE Workshop on Neural Networks for Signal Processing

Pruning recurrent neural networks for improved generalization performance



Abstract

The experimental results in this paper demonstrate that a simple pruning/retraining method effectively improves the generalization performance of recurrent neural networks trained to recognize regular languages. The technique also permits the extraction of symbolic knowledge in the form of deterministic finite-state automata (DFA) which are more consistent with the rules to be learned. Weight decay has also been shown to improve a network's generalization performance. Simulations with two small DFA (≤ 10 states) and a large finite-memory machine (64 states) demonstrate that the performance improvement due to pruning/retraining is generally superior to the improvement due to training with weight decay. In addition, there is no need to guess a 'good' decay rate.
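To illustrate the prune-then-retrain idea described in the abstract, the sketch below prunes the smallest-magnitude recurrent weights of a toy Elman-style RNN and then retrains while keeping the pruned connections at zero; a plain weight-decay update is shown alongside for contrast. The abstract does not specify the pruning criterion, network sizes, or training details, so the magnitude criterion, array shapes, and the stand-in gradient here are assumptions for illustration only.

```python
# Minimal sketch: magnitude-based prune/retrain vs. weight decay for an
# Elman-style RNN weight matrix. All sizes and the "gradient" are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recurrent (hidden-to-hidden) weights of a small RNN.
W_rec = rng.normal(scale=0.5, size=(8, 8))

def prune_by_magnitude(W, fraction):
    """Zero out the smallest-magnitude weights; return the pruned matrix and
    a mask that keeps those weights at zero during retraining."""
    k = int(fraction * W.size)
    threshold = np.sort(np.abs(W), axis=None)[k]
    mask = np.abs(W) >= threshold
    return W * mask, mask

def retrain_step(W, mask, grad, lr=0.05):
    """One retraining update; the mask prevents pruned weights from reviving."""
    return (W - lr * grad) * mask

def weight_decay_step(W, grad, lr=0.05, decay=1e-3):
    """For contrast: plain gradient step with an L2 weight-decay term,
    which requires choosing a 'good' decay rate up front."""
    return W - lr * (grad + decay * W)

# Prune 30% of the recurrent weights, then retrain with stand-in gradients.
W_rec, rec_mask = prune_by_magnitude(W_rec, fraction=0.3)
for _ in range(10):
    dummy_grad = rng.normal(scale=0.1, size=W_rec.shape)  # stand-in for a BPTT gradient
    W_rec = retrain_step(W_rec, rec_mask, dummy_grad)

print("fraction of recurrent weights pruned:", 1.0 - rec_mask.mean())
```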
