The experimental results in this paper demonstrate that a simple pruning/retraining method effectively improves the generalization performance of recurrent neural networks trained to recognize regular languages. The technique also permits the extraction of symbolic knowledge in the form of deterministic finite-state automata (DFA) which are more consistent with the rules to be learned. Weight decay has also been shown to improve a network's generalization performance. Simulations with two small DFA (≤10 states) and a large finite-memory machine (64 states) demonstrate that the performance improvement due to pruning/retraining is generally superior to the improvement due to training with weight decay. In addition, there is no need to guess a 'good' decay rate.
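The pruning/retraining cycle described above can be sketched as follows. This is a minimal, hypothetical illustration of magnitude-based pruning followed by retraining, shown on a toy linear model rather than the recurrent networks used in the paper; the data, model, and pruning threshold are all assumptions for demonstration only.

```python
import numpy as np

# Toy regression problem: only some features are actually relevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
true_w = np.array([2.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

def train(X, y, mask, w=None, lr=0.05, steps=500):
    """Gradient descent on squared error; 'mask' keeps pruned weights at zero."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = (w - lr * grad) * mask  # pruned connections stay removed
    return w

# 1) Train the full model (no connections pruned yet).
mask = np.ones(X.shape[1])
w = train(X, y, mask)

# 2) Prune: remove the smallest-magnitude weights (here, the bottom half
#    by median absolute value -- an arbitrary choice for this sketch).
threshold = np.median(np.abs(w))
mask = (np.abs(w) > threshold).astype(float)

# 3) Retrain the surviving weights, starting from the pruned solution.
w = train(X, y, mask, w=w * mask)
```

The key point mirrored from the paper: unlike weight decay, this procedure has no decay-rate hyperparameter to tune; the only choice is how many connections to prune before retraining.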