STOCHASTIC GENERALIZED GRADIENT METHODS FOR TRAINING NONCONVEX NONSMOOTH NEURAL NETWORKS

Journal: Cybernetics and Systems Analysis

Abstract

The paper observes a similarity between the stochastic optimal control of discrete dynamical systems and the training of multilayer neural networks. It focuses on contemporary deep networks with nonconvex nonsmooth loss and activation functions. The machine learning problems are treated as nonconvex nonsmooth stochastic optimization problems. The so-called generalized-differentiable functions serve as a model of nonsmooth nonconvex dependences. The backpropagation method for calculating stochastic generalized gradients of the learning quality functional for such systems is substantiated on the basis of the Hamilton-Pontryagin formalism. Stochastic generalized gradient learning algorithms are extended to the training of nonconvex nonsmooth neural networks. The performance of a stochastic generalized gradient algorithm is illustrated on a linear multiclass classification problem.
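The control-theoretic reading of backpropagation can be stated compactly. For a layered system x_{t+1} = f_t(x_t, w_t) with terminal loss J = phi(x_T), the discrete Hamilton-Pontryagin formalism introduces adjoint variables psi_t with psi_T = d phi / d x_T and psi_t = (d f_t / d x_t)^T psi_{t+1}, from which d J / d w_t = (d f_t / d w_t)^T psi_{t+1}; this backward recursion is exactly the backward pass of backpropagation, and the abstract indicates that the paper substantiates the analogous recursion when the derivatives are replaced by stochastic generalized gradients of generalized-differentiable f_t and phi. (This is the standard smooth form of the correspondence, not the paper's exact notation.)

The abstract gives no code, so the following is only a minimal sketch of a stochastic generalized (sub)gradient iteration for the paper's illustration, the linear multiclass classification problem, assuming a nonsmooth multiclass hinge loss and a diminishing step size; the names (sgg_step, the synthetic data) are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sgg_step(W, x, y, lr):
        """One stochastic generalized (sub)gradient step for the nonsmooth
        multiclass hinge loss L(W; x, y) = max_j (w_j.x - w_y.x + [j != y]).
        At kinks of the max, the chosen direction is one element of the
        generalized gradient set (illustrative loss, not from the paper)."""
        scores = W @ x
        margins = scores - scores[y] + 1.0   # margin violation per class
        margins[y] = 0.0                     # the true class contributes zero loss
        j = int(np.argmax(margins))          # most violated class
        if margins[j] > 0.0:                 # positive loss: take a subgradient step
            W[j] -= lr * x                   # push the violating score down
            W[y] += lr * x                   # push the true-class score up
        return W

    # Toy run on synthetic, linearly generated labels (illustrative only).
    n_classes, n_features, n_samples = 3, 5, 200
    X = rng.normal(size=(n_samples, n_features))
    labels = (X @ rng.normal(size=(n_classes, n_features)).T).argmax(axis=1)

    W = np.zeros((n_classes, n_features))
    for t in range(2000):
        i = rng.integers(n_samples)
        W = sgg_step(W, X[i], labels[i], lr=0.5 / np.sqrt(t + 1))  # diminishing steps

    print("training accuracy:", ((X @ W.T).argmax(axis=1) == labels).mean())

With a diminishing step size this is the classical stochastic subgradient recipe; on this (convex) hinge loss it is only a special case, while the paper's analysis concerns extending such iterations to nonconvex nonsmooth networks.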