American Journal of Computational Mathematics > Completeness Problem of the Deep Neural Networks

Completeness Problem of the Deep Neural Networks



Abstract

Hornik, Stinchcombe & White have shown that multilayer feedforward networks with enough hidden layers are universal approximators. Roux & Bengio have proved that adding hidden units yields strictly improved modeling power, and that Restricted Boltzmann Machines (RBMs) are universal approximators of discrete distributions. In this paper, we provide yet another proof. The advantage of this new proof is that it leads to several new learning algorithms. We prove that Deep Neural Networks implement an expansion and that this expansion is complete. First, we briefly review the basic Boltzmann Machine and the fact that its dynamics generate Markov chains whose invariant distributions are the model distributions. We then review the θ-transformation and its completeness, i.e., any function can be expanded by the θ-transformation. We further review the ABM (Attrasoft Boltzmann Machine). The invariant distribution of the ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We discuss how to convert an ABM into a Deep Neural Network. Finally, by establishing the equivalence between an ABM and the Deep Neural Network, we prove that the Deep Neural Network is complete.
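The abstract's claim that a Boltzmann Machine's dynamics generate a Markov chain whose invariant distribution is the model's Boltzmann distribution can be illustrated with block Gibbs sampling on a toy RBM. This is a minimal sketch with arbitrary random parameters, not the paper's ABM or θ-transformation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny RBM: 4 visible and 3 hidden binary units with small random weights.
n_v, n_h = 4, 3
W = rng.normal(scale=0.1, size=(n_v, n_h))
b_v = np.zeros(n_v)  # visible biases
b_h = np.zeros(n_h)  # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One transition of the RBM's Markov chain: sample h | v, then v | h."""
    p_h = sigmoid(v @ W + b_h)
    h = (rng.random(n_h) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b_v)
    return (rng.random(n_v) < p_v).astype(float)

# Run the chain; after burn-in, its samples approximate draws from the
# RBM's invariant (Boltzmann) distribution over visible configurations.
v = rng.integers(0, 2, size=n_v).astype(float)
samples = []
for t in range(2000):
    v = gibbs_step(v)
    if t >= 500:  # discard burn-in transients
        samples.append(v)

mean_activation = np.mean(samples, axis=0)
print(mean_activation)  # empirical visible-unit marginals along the chain
```

The chain's transition kernel leaves the Boltzmann distribution invariant, which is what lets an RBM with enough hidden units represent any discrete distribution in the Roux & Bengio sense.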
