International Conference on Artificial Neural Networks

Scaling in a hierarchical unsupervised network



Abstract

A persistent worry with computational models of unsupervised learning is that learning will become more difficult as the problem is scaled. We examine this issue in the context of a novel hierarchical, generative model that can be viewed as a non-linear generalization of factor analysis and can be implemented in a neural network. The model performs perceptual inference in a probabilistically consistent manner by using top-down, bottom-up and lateral connections. These connections can be learned using simple rules that require only locally available information. We first demonstrate that the model can extract a sparse, distributed, hierarchical representation of global disparity from simplified random-dot stereograms. We then investigate some of the scaling properties of the algorithm on this problem and find that: (1) increasing the image size leads to faster and more reliable learning; (2) increasing the depth of the network from one to two hidden layers leads to better representations at the first hidden layer; and (3) once one part of the network has discovered how to represent disparity, it "supervises" other parts of the network, greatly speeding up their learning.
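The abstract names two concrete ingredients: a hierarchical generative model that generalizes factor analysis non-linearly, and simplified random-dot stereograms whose single latent cause is a global disparity. The NumPy sketch below illustrates both ideas; it is not the paper's implementation, and the layer sizes, the logistic non-linearity, the Gaussian noise model, and the 1-D stereogram construction are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def generate(weights, top_dim, noise_std=0.1):
    """Ancestral (top-down) sample from a hierarchical, non-linear factor-analysis-style model."""
    h = rng.standard_normal(top_dim)              # top-level factors ~ N(0, I)
    for W in weights:                             # descend one layer at a time
        h = sigmoid(W @ h) + noise_std * rng.standard_normal(W.shape[0])
    return h                                      # bottom layer plays the role of the image

# Illustrative layer sizes: 2 top-level factors -> 8 hidden units -> 32 visible units.
layer_sizes = [2, 8, 32]
weights = [rng.standard_normal((layer_sizes[i + 1], layer_sizes[i]))
           for i in range(len(layer_sizes) - 1)]
sample = generate(weights, top_dim=layer_sizes[0])

def random_dot_stereogram(n_pixels=16, disparity=2, dot_prob=0.5):
    """Simplified 1-D random-dot stereogram: the right image is the left image
    shifted by a single global disparity, which is the only latent cause."""
    left = (rng.random(n_pixels) < dot_prob).astype(float)
    right = np.roll(left, disparity)
    return np.concatenate([left, right])          # both eyes' images form one input vector

pair = random_dot_stereogram()
print(sample.shape, pair.shape)                   # (32,) (32,)
```

Note that the sketch only shows top-down (ancestral) generation, the part that directly generalizes linear factor analysis; the paper's inference and learning additionally use bottom-up and lateral connections with purely local update rules, which are not reproduced here.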