Pipelined parallel contrastive divergence for continuous generative model learning

Abstract

In this paper we propose a method for continuously processing and learning from data in Restricted Boltzmann Machines (RBMs). Traditionally, RBMs are trained using Contrastive Divergence (CD), an algorithm consisting of two phases, of which only one is driven by data. This not only prohibits training RBMs in conjunction with continuous-time data streams, especially in event-based real-time systems, but also limits the training speed of RBMs in large-scale machine learning systems. The model we propose trades space for time and, by pipelining information propagation through the network, is capable of processing both phases of the CD learning algorithm simultaneously. Simulation results of our model on generative and discriminative tasks show convergence to the original CD algorithm. We conclude with a discussion of applying our method to other deep neural networks, which would yield continuous learning and reduced training time.
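
For orientation, below is a minimal NumPy sketch of the standard, sequential CD-1 update that the abstract contrasts against, with the data-driven (positive) phase and the model-driven (negative) phase marked in comments. This is textbook CD for a binary RBM, not the pipelined parallel scheme proposed in the paper, and all variable names and sizes are illustrative assumptions.

    # Minimal sketch of standard (sequential) CD-1 for a binary RBM.
    # Illustrative only; the paper's pipelined variant overlaps the two
    # phases instead of running them one after the other.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(W, b, c, v0, lr=0.01):
        """One CD-1 update. v0: batch of visible vectors, shape (batch, n_visible)."""
        # Positive (data-driven) phase: hidden activations given the data.
        ph0 = sigmoid(v0 @ W + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(v0.dtype)
        # Negative (model-driven) phase: one Gibbs step back through the visibles.
        pv1 = sigmoid(h0 @ W.T + b)
        ph1 = sigmoid(pv1 @ W + c)
        # CD update: <v h>_data - <v h>_model, averaged over the batch.
        n = v0.shape[0]
        W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        b += lr * (v0 - pv1).mean(axis=0)
        c += lr * (ph0 - ph1).mean(axis=0)
        return W, b, c

    # Example usage with illustrative sizes (e.g. MNIST-like inputs):
    W = 0.01 * rng.standard_normal((784, 256))
    b = np.zeros(784)
    c = np.zeros(256)

Note that in this sequential form the negative phase cannot start until the positive phase of the same batch has finished; the space-for-time tradeoff described in the abstract removes exactly this dependency by pipelining.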
