Annual IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems

Controlling the hidden layers' output to optimizing the training process in the Deep Neural Network algorithm


Abstract

Deep learning is one of the most recent developments of the Artificial Neural Network (ANN) in machine learning. The Deep Neural Network (DNN) algorithm is commonly used in image and speech recognition applications. As ANNs have developed, DNNs have come to contain many hidden layers. In a DNN, the output of each node is a quadratic function of its inputs, and the training process is very difficult. In this paper, we attempt to optimize the training process by slightly restructuring the deep architecture and combining several existing algorithms. The output error of each unit in the previous layer is calculated, and the weights of the unit with the smallest error are kept in the next iteration. This paper uses MNIST handwriting images as its training and test data. After testing, it can be concluded that, by selecting an output in each hidden layer in this way, the DNN training process becomes approximately 8% faster.
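The core idea in the abstract, computing a per-unit output error in a hidden layer and keeping (freezing) the weights of the smallest-error unit for the next iteration, can be sketched as follows. This is a minimal illustration under assumed details: the paper gives no pseudocode, so the error measure (mean back-propagated error magnitude per unit), the network size, and the synthetic stand-in for MNIST are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny synthetic stand-in for MNIST: 100 samples, 64 inputs, 10 classes.
X = rng.standard_normal((100, 64))
y = rng.integers(0, 10, size=100)
Y = np.eye(10)[y]                      # one-hot targets

n_hidden = 32
W1 = rng.standard_normal((64, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, 10)) * 0.1
lr = 0.1
frozen = None                          # index of the unit whose weights are kept

for epoch in range(20):
    # Forward pass through one hidden layer.
    H = sigmoid(X @ W1)                # hidden-layer outputs
    P = sigmoid(H @ W2)                # network outputs
    # Standard backprop for a two-layer sigmoid network.
    dP = (P - Y) * P * (1 - P)
    dH = (dP @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ dP)
    grad_W1 = X.T @ dH
    if frozen is not None:
        grad_W1[:, frozen] = 0.0       # keep the selected unit's weights
    W1 -= lr * grad_W1
    # Per-unit error in the hidden layer (assumed measure: mean |error|).
    unit_err = np.mean(np.abs(dH), axis=0)
    frozen = int(np.argmin(unit_err))  # smallest-error unit is frozen next

print("frozen unit index:", frozen)
```

Freezing one unit per iteration skips a column of the gradient update, which is one plausible reading of how the selection could shorten training.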
