International Conference on Statistical Language and Speech Processing

A Regularization Post Layer: An Additional Way How to Make Deep Neural Networks Robust



Abstract

Neural networks (NNs) are prone to overfitting, deep neural networks especially so when training data are not abundant. Several techniques help prevent overfitting, e.g., L1/L2 regularization, unsupervised pre-training, early stopping, dropout, bootstrapping, or cross-validation model aggregation. In this paper, we propose a regularization post-layer that can be combined with these prior techniques and brings additional robustness to the NN. We train the regularization post-layer in the cross-validation (CV) aggregation scenario: the CV held-out folds are used to train an additional neural network post-layer that boosts the network's robustness. We tested various post-layer topologies and compared the results with other regularization techniques. As a benchmark task, we selected TIMIT phone recognition, a well-known and still popular task where training data are limited and the regularization techniques used play a key role. The regularization post-layer is, however, a general method and may be employed in any classification task.
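The abstract does not spell out the training procedure, but the CV-aggregation idea it describes can be sketched as follows: each base network is trained on K-1 folds, its posteriors on the remaining held-out fold are collected, and those held-out posteriors (predictions on data the base net never saw) form the training set for the post-layer. Below is a minimal Python sketch of one plausible reading, using scikit-learn MLPs as stand-ins for both the base networks and the post-layer; the function names, topologies, and test-time averaging are illustrative assumptions, not the paper's exact method.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

def train_cv_with_post_layer(X, y, n_folds=5):
    # Assumes every fold contains all classes, so predict_proba columns align.
    base_nets = []
    heldout_posteriors, heldout_labels = [], []
    for train_idx, held_idx in KFold(n_folds, shuffle=True, random_state=0).split(X):
        # Train one base network per CV split (topology is a placeholder).
        net = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=300)
        net.fit(X[train_idx], y[train_idx])
        base_nets.append(net)
        # The base net's posteriors on its held-out fold are predictions on
        # unseen data; they become the post-layer's training set.
        heldout_posteriors.append(net.predict_proba(X[held_idx]))
        heldout_labels.append(y[held_idx])
    # The regularization post-layer: a small NN trained only on held-out
    # posteriors, learning to correct the base nets' unseen-data behaviour.
    post_layer = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
    post_layer.fit(np.vstack(heldout_posteriors), np.concatenate(heldout_labels))
    return base_nets, post_layer

def predict(base_nets, post_layer, X_test):
    # One plausible aggregation at test time: average the base networks'
    # posteriors, then let the post-layer make the final class decision.
    avg = np.mean([net.predict_proba(X_test) for net in base_nets], axis=0)
    return post_layer.predict(avg)

The key point of the scheme is that the post-layer never sees posteriors the base networks produced on their own training data, which would be over-confident; training it exclusively on held-out outputs is what adds the robustness.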