Annual Conference on Neural Information Processing Systems

Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity

Abstract

We develop a general duality between neural networks and compositional kernel Hilbert spaces. We introduce the notion of a computation skeleton, an acyclic graph that succinctly describes both a family of neural networks and a kernel space. Random neural networks are generated from a skeleton through node replication followed by sampling from a normal distribution to assign weights. The kernel space consists of functions that arise by compositions, averaging, and non-linear transformations governed by the skeleton's graph topology and activation functions. We prove that random networks induce representations which approximate the kernel space. In particular, it follows that random weight initialization often yields a favorable starting point for optimization despite the worst-case intractability of training neural networks.
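The last two sentences carry the quantitative claim: the representation computed by a randomly initialized network already approximates the kernel induced by its skeleton. Below is a minimal NumPy sketch of the depth-one case (an illustration, not code from the paper), assuming a single fully connected ReLU layer with i.i.d. N(0,1) weights; the function names and width schedule are ours. For this skeleton the dual kernel has a closed form, E_w[relu(w·x) relu(w·y)] = ||x|| ||y|| (sin t + (π − t) cos t) / (2π) with t the angle between x and y (Cho and Saul's degree-1 arc-cosine kernel, up to a constant), so the random-feature estimate can be checked directly.

```python
import numpy as np

def arccos_kernel_deg1(x, y):
    # Closed-form dual kernel of one ReLU unit with N(0, I) weights:
    # E_w[relu(w.x) relu(w.y)] = |x||y| (sin t + (pi - t) cos t) / (2 pi),
    # where t is the angle between x and y.
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    t = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2.0 * np.pi)

def empirical_kernel(x, y, width, rng):
    # Inner product of the representations a random ReLU layer assigns
    # to x and y, averaged over hidden units: a Monte Carlo estimate of
    # the expectation above.
    W = rng.standard_normal((width, x.shape[0]))  # i.i.d. N(0,1) weights
    return np.maximum(W @ x, 0.0) @ np.maximum(W @ y, 0.0) / width

rng = np.random.default_rng(0)
x, y = rng.standard_normal(16), rng.standard_normal(16)
print("closed form:", arccos_kernel_deg1(x, y))
for width in (100, 10_000, 1_000_000):
    print(f"width {width:>9}:", empirical_kernel(x, y, width, rng))
```

Since the hidden units are i.i.d., the estimation error shrinks roughly like 1/sqrt(width), which is the sense in which a wide randomly initialized network already sits close to the kernel-space representation before any training.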
