IEEE/CVF Conference on Computer Vision and Pattern Recognition

Towards a Mathematical Understanding of the Difficulty in Learning with Feedforward Neural Networks

Abstract

Training deep neural networks to solve machine learning problems is a great challenge in the field, mainly because the associated optimisation problem is highly non-convex. Recent developments have suggested that many training algorithms do not suffer from undesired local minima under certain scenarios, which has consequently led to great efforts to pursue mathematical explanations for such observations. This work provides an alternative mathematical understanding of the challenge from a smooth optimisation perspective. By assuming exact learning of finite samples, sufficient conditions are identified via a critical point analysis to ensure that any local minimum is also a global minimum. Furthermore, a state-of-the-art algorithm, known as the Generalised Gauss-Newton (GGN) algorithm, is rigorously revisited as an approximate Newton's algorithm; under the condition of exact learning, it shares the property of being locally quadratically convergent to a global minimum.
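
To make the critical-point argument concrete, the following is a minimal sketch of the style of reasoning the abstract describes; the notation, the convexity assumption on the per-sample loss, and the full-rank condition are illustrative stand-ins, not the paper's actual conditions.

% Illustrative sketch, not the paper's exact statement.
% Empirical loss over N samples, with per-sample loss \ell convex in its
% first argument and network output f(x_n; w):
E(w) = \sum_{n=1}^{N} \ell\big( f(x_n; w),\, y_n \big)
% At a critical point w^*, the chain rule gives
\nabla E(w^*) = \sum_{n=1}^{N} J_n(w^*)^{\top}\, \nabla_{f}\ell\big( f(x_n; w^*),\, y_n \big) = 0,
\qquad J_n(w) := \frac{\partial f(x_n; w)}{\partial w}
% If the stacked Jacobian (J_1; \dots; J_N) has full row rank at w^*, each
% gradient \nabla_f \ell_n must vanish, and convexity of \ell then makes
% every summand globally minimal. Under exact learning (some w interpolates
% all N samples), the global minimum of E is zero, so E(w^*) = 0.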
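The GGN update itself can be sketched briefly. Below is a minimal NumPy illustration of a damped Gauss-Newton step for a one-hidden-layer network under squared loss, the special case in which the GGN matrix J^T H J reduces to the classic J^T J (H, the Hessian of the loss with respect to the network output, is the identity). The architecture, the damping value, and all helper names are illustrative assumptions, not the paper's formulation.

import numpy as np

def forward(params, X):
    """One-hidden-layer tanh network with scalar output (illustrative)."""
    W1, b1, w2, b2 = params
    H = np.tanh(X @ W1.T + b1)        # hidden activations, shape (N, h)
    return H @ w2 + b2                # outputs, shape (N,)

def jacobian(params, X):
    """Analytic Jacobian of outputs w.r.t. flattened parameters, one row per sample."""
    W1, b1, w2, b2 = params
    Z = X @ W1.T + b1                 # pre-activations, (N, h)
    Ht = np.tanh(Z)                   # tanh(Z), (N, h)
    Hd = 1.0 - Ht ** 2                # tanh'(Z), (N, h)
    N, h = Ht.shape
    dW1 = (w2 * Hd)[:, :, None] * X[:, None, :]   # d f / d W1[i, j], (N, h, d)
    db1 = w2 * Hd                                  # d f / d b1, (N, h)
    dw2 = Ht                                       # d f / d w2, (N, h)
    db2 = np.ones((N, 1))                          # d f / d b2
    return np.hstack([dW1.reshape(N, -1), db1, dw2, db2])

def ggn_step(params, X, y, damping=1e-3):
    """One damped Gauss-Newton step for squared loss 0.5 * ||f - y||^2.
    Solves (J^T J + damping * I) delta = J^T r and takes w <- w - delta."""
    r = forward(params, X) - y        # residuals
    J = jacobian(params, X)
    G = J.T @ J + damping * np.eye(J.shape[1])
    step = np.linalg.solve(G, J.T @ r)
    W1, b1, w2, b2 = params
    h, d = W1.shape
    sW1 = step[: h * d].reshape(h, d)
    sb1 = step[h * d : h * d + h]
    sw2 = step[h * d + h : h * d + 2 * h]
    sb2 = step[-1]
    return (W1 - sW1, b1 - sb1, w2 - sw2, b2 - sb2)

# Toy regression run (hypothetical data, purely for illustration).
rng = np.random.default_rng(0)
d, h, N = 2, 8, 20
params = (0.5 * rng.standard_normal((h, d)), np.zeros(h),
          0.5 * rng.standard_normal(h), 0.0)
X = rng.standard_normal((N, d))
y = np.sin(X[:, 0])
for _ in range(50):
    params = ggn_step(params, X, y)
print("final squared loss:", 0.5 * np.sum((forward(params, X) - y) ** 2))

Near an exact-learning solution the residuals vanish, which is precisely the regime where the Gauss-Newton matrix approaches the true Hessian and Newton-like local quadratic convergence can be expected, matching the abstract's claim.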
