Knowledge-Based Systems

Enhancing performance of restricted Boltzmann machines via log-sum regularization


Abstract

Restricted Boltzmann machines (RBMs) are often used as building blocks to construct a deep belief network. By optimizing several RBMs, the deep network can be trained quickly to achieve good performance on the tasks of interest. To further improve the quality of the learnt data representations, much research has focused on incorporating sparsity into RBMs. In this paper, we propose a novel sparse RBM model, referred to as LogSumRBM. Instead of constraining the expected activation of every hidden unit to the same low level of sparsity as done in [27], we explicitly encourage the hidden units to be sparse by adding a log-sum norm constraint on the totality of the hidden units' activation probabilities. In this approach, we do not need to keep the "firing rate" of each hidden unit at a level fixed beforehand, so the sparsity level of each hidden unit can be learnt automatically from the task at hand. Experiments conducted on several image data sets of different scales show that LogSumRBM learns sparser and more discriminative representations than related state-of-the-art models, and that stacking two LogSumRBMs learns more meaningful features that mimic computations in the cortical hierarchy. Meanwhile, LogSumRBM can also be used to pre-train deep networks and achieves better classification performance.
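The log-sum norm mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the smoothing constant `eps`, and the way the gradient would enter a contrastive-divergence update are all assumptions introduced here for clarity.

```python
import numpy as np

def log_sum_penalty(h_probs, eps=1e-2):
    """Log-sum norm of hidden activation probabilities:
    sum_j log(1 + p_j / eps). For small eps this behaves like an
    approximation to the L0 norm, encouraging sparse activations
    without pinning every unit to one preset firing rate."""
    return np.sum(np.log(1.0 + h_probs / eps))

def log_sum_grad(h_probs, eps=1e-2):
    # d/dp log(1 + p/eps) = 1 / (eps + p): large gradient near zero,
    # so small activations are pushed toward exactly zero.
    return 1.0 / (eps + h_probs)

# Example: a batch of hidden-unit activation probabilities
rng = np.random.default_rng(0)
h = rng.uniform(0.0, 1.0, size=(4, 8))  # 4 samples, 8 hidden units
penalty = log_sum_penalty(h)
grad = log_sum_grad(h)  # in training this would be scaled by a
                        # regularization weight and folded into the
                        # (hypothetical) CD weight update
```

Unlike a fixed-target sparsity penalty, the log-sum term only pulls activations toward zero; how sparse each hidden unit ends up is determined by the trade-off with the RBM's likelihood objective on the given data.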
