Annual Conference on Neural Information Processing Systems

Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity


Abstract

We develop a general duality between neural networks and compositional kernel Hilbert spaces. We introduce the notion of a computation skeleton, an acyclic graph that succinctly describes both a family of neural networks and a kernel space. Random neural networks are generated from a skeleton through node replication followed by sampling from a normal distribution to assign weights. The kernel space consists of functions that arise by compositions, averaging, and non-linear transformations governed by the skeleton's graph topology and activation functions. We prove that random networks induce representations which approximate the kernel space. In particular, it follows that random weight initialization often yields a favorable starting point for optimization despite the worst-case intractability of training neural networks.
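The abstract states the duality in general graph terms; the sketch below makes it concrete for the simplest skeleton, a fully connected chain with ReLU activations. It is a minimal illustration under assumptions, not the paper's construction in full generality: the closed-form dual activation is the standard one for ReLU, and the width, depth, and Gaussian weight scale (variance 2/fan-in, corresponding to a normalized ReLU) are illustrative choices.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def dual_relu(rho):
    # Dual activation of the ReLU: the correlation of relu(u), relu(v)
    # for standard Gaussians (u, v) with correlation rho, normalized so
    # that dual_relu(1) = 1. This closed form is standard for ReLU.
    rho = np.clip(rho, -1.0, 1.0)
    return (np.sqrt(1.0 - rho**2) + (np.pi - np.arccos(rho)) * rho) / np.pi

def compositional_kernel(x, xp, depth):
    # Kernel induced by a depth-`depth` fully connected chain skeleton
    # with ReLU activations, on unit-norm inputs: iterate the dual.
    rho = float(x @ xp)
    for _ in range(depth):
        rho = dual_relu(rho)
    return rho

def sample_network(d, depth, width, rng):
    # Random network from the skeleton: replicate each internal node
    # `width` times and draw weights i.i.d. N(0, 2 / fan_in), the
    # scaling that matches the normalized ReLU above.
    dims = [d] + [width] * depth
    return [rng.normal(0.0, np.sqrt(2.0 / dims[i]), size=(dims[i + 1], dims[i]))
            for i in range(depth)]

def representation(weights, x):
    # Feature map computed by the random network (no output layer).
    h = x
    for W in weights:
        h = relu(W @ h)
    return h

rng = np.random.default_rng(0)
d, depth, width = 64, 3, 2000  # illustrative sizes

x = rng.normal(size=d);  x /= np.linalg.norm(x)
xp = rng.normal(size=d); xp /= np.linalg.norm(xp)

weights = sample_network(d, depth, width, rng)
hx, hxp = representation(weights, x), representation(weights, xp)

# Correlation of the random representations vs. the analytic kernel value.
empirical = (hx @ hxp) / (np.linalg.norm(hx) * np.linalg.norm(hxp))
analytic = compositional_kernel(x, xp, depth)
print(f"empirical: {empirical:.3f}   analytic: {analytic:.3f}")
```

As `width` (the node-replication factor) grows, the empirical correlation should concentrate around the analytic value at a rate on the order of 1/sqrt(width); this is the sense in which the random network's representation approximates the compositional kernel space.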
