International Congress on Image and Signal Processing

Stacked denoising autoencoder and dropout together to prevent overfitting in deep neural network

Abstract

Deep neural networks have very strong nonlinear mapping capability, and as the number of layers and the number of units per layer increase, their representational power grows. However, this can cause severe overfitting and slow down both training and testing. Dropout is a simple and efficient way to prevent overfitting. We combine stacked denoising autoencoders with dropout; the combination achieves better performance than dropout alone and reduces time complexity during the fine-tuning phase. We pre-train the network with a stacked denoising autoencoder, and dropout is applied during training to prevent units from co-adapting too much. At test time, the method approximates the effect of averaging the predictions of many networks by using a single network architecture that shares the weights. We show the performance of this method on the common benchmark dataset MNIST.
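To make the recipe concrete, here is a minimal sketch in PyTorch (not the authors' code; the layer sizes, the 0.3 corruption level, the 0.5 dropout rate, and all other hyperparameters are illustrative assumptions): one denoising-autoencoder layer, greedy layer-wise pre-training of the stack, and a fine-tuning classifier that reuses the pre-trained encoders with dropout on every hidden layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingAutoencoder(nn.Module):
    """One layer of the stack: corrupt the input with masking noise,
    then reconstruct the clean input from the corrupted version."""
    def __init__(self, in_dim, hid_dim, corruption=0.3):
        super().__init__()
        self.encode = nn.Linear(in_dim, hid_dim)
        self.decode = nn.Linear(hid_dim, in_dim)
        self.corruption = corruption

    def forward(self, x):
        # Masking noise: randomly zero a fraction of the input units.
        mask = (torch.rand_like(x) > self.corruption).float()
        h = torch.sigmoid(self.encode(x * mask))
        return torch.sigmoid(self.decode(h))

    def represent(self, x):
        # Clean (uncorrupted) encoding, fed to the next layer when stacking.
        return torch.sigmoid(self.encode(x))

def pretrain_stack(dims, loader, epochs=5, lr=1e-3):
    """Greedy layer-wise pre-training: each autoencoder reconstructs the
    representation produced by the already-trained layers below it."""
    daes = []
    for in_dim, hid_dim in zip(dims[:-1], dims[1:]):
        dae = DenoisingAutoencoder(in_dim, hid_dim)
        opt = torch.optim.Adam(dae.parameters(), lr=lr)
        for _ in range(epochs):
            for x, _ in loader:              # expects inputs in [0, 1]
                x = x.view(x.size(0), -1)
                with torch.no_grad():        # lower layers stay frozen
                    for lower in daes:
                        x = lower.represent(x)
                loss = F.binary_cross_entropy(dae(x), x)
                opt.zero_grad()
                loss.backward()
                opt.step()
        daes.append(dae)
    return daes

class FineTuneNet(nn.Module):
    """Classifier initialized from the pre-trained encoders; dropout is
    applied to each hidden layer to keep units from co-adapting."""
    def __init__(self, daes, n_classes=10, p=0.5):
        super().__init__()
        self.hidden = nn.ModuleList([d.encode for d in daes])
        self.dropout = nn.Dropout(p)
        self.out = nn.Linear(daes[-1].encode.out_features, n_classes)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        for layer in self.hidden:
            x = self.dropout(torch.sigmoid(layer(x)))
        return self.out(x)
```

For MNIST-sized inputs one might call, for example, `daes = pretrain_stack([784, 500, 300], train_loader)` followed by `net = FineTuneNet(daes)` and ordinary supervised fine-tuning. Because `nn.Dropout` implements inverted dropout (activations are scaled by 1/(1-p) during training), calling `net.eval()` at test time simply disables the random masking and runs the full weight-sharing network, which matches the abstract's test-time approximation to averaging the predictions of many thinned networks.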
