
Representational Power of Restricted Boltzmann Machines and Deep Belief Networks


Abstract

Deep belief networks (DBNs) are generative neural network models with many layers of hidden explanatory factors, recently introduced by Hinton, Osindero, and Teh (2006) along with a greedy layer-wise unsupervised learning algorithm. The building block of a DBN is a probabilistic model called a restricted Boltzmann machine (RBM), used to represent one layer of the model. Restricted Boltzmann machines are interesting because inference is easy in them and because they have been successfully used as building blocks for training deeper models. We first prove that adding hidden units yields strictly improved modeling power, while a second theorem shows that RBMs are universal approximators of discrete distributions. We then study the question of whether DBNs with more layers are strictly more powerful in terms of representational power. This suggests a new and less greedy criterion for training RBMs within DBNs.
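To make concrete why inference is easy in an RBM (as the abstract notes, the hidden units are conditionally independent given the visible layer, and vice versa), here is a minimal pure-Python sketch of block Gibbs sampling in a binary RBM. All names and sizes (`W`, `b`, `c`, `NV`, `NH`) are illustrative choices, not taken from the paper:

```python
import math
import random

random.seed(0)

# Minimal binary RBM sketch. Energy of a joint configuration (v, h):
#   E(v, h) = -sum_ij v_i W_ij h_j - sum_i b_i v_i - sum_j c_j h_j
NV, NH = 6, 3  # visible / hidden unit counts (arbitrary for the demo)
W = [[random.gauss(0.0, 0.1) for _ in range(NH)] for _ in range(NV)]
b = [0.0] * NV  # visible biases
c = [0.0] * NH  # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_h_given_v(v):
    # Hidden units are conditionally independent given v, so exact
    # inference is a single pass of independent sigmoid activations.
    probs = [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(NV)))
             for j in range(NH)]
    return probs, [1.0 if random.random() < p else 0.0 for p in probs]

def sample_v_given_h(h):
    # By symmetry, visible units are conditionally independent given h.
    probs = [sigmoid(b[i] + sum(W[i][j] * h[j] for j in range(NH)))
             for i in range(NV)]
    return probs, [1.0 if random.random() < p else 0.0 for p in probs]

# One step of block Gibbs sampling: v0 -> h0 -> v1. This alternation is
# the basic operation behind training each RBM layer of a DBN.
v0 = [float(random.randint(0, 1)) for _ in range(NV)]
ph0, h0 = sample_h_given_v(v0)
pv1, v1 = sample_v_given_h(h0)
```

Running the alternation for many steps yields samples from the RBM's joint distribution; truncating it after one step is the basis of contrastive-divergence training.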

Bibliographic record

  • Source: Neural computation | 2008, Issue 6 | pp. 1631-1649 | 19 pages
  • Authors: Nicolas Le Roux; Yoshua Bengio
  • Indexed in: Science Citation Index (SCI); Chemical Abstracts (CA)
  • Format: PDF
  • Language: English
