Neurocomputing

A new deep neural network based on a stack of single-hidden-layer feedforward neural networks with randomly fixed hidden neurons

Abstract

Single-hidden-layer feedforward neural networks with randomly fixed hidden neurons (RHN-SLFNs) have been shown, both theoretically and experimentally, to be fast and accurate. Moreover, it is well known that deep architectures can learn higher-level representations and thus potentially capture relevant higher-level abstractions. However, most current deep learning methods require a long time to solve a non-convex optimization problem. In this paper, we propose a stacked deep neural network, St-URHN-SLFNs, built from unsupervised RHN-SLFNs following the stacked-generalization philosophy, to deal with unsupervised problems. An empirical study on a wide range of data sets demonstrates that the proposed algorithm outperforms state-of-the-art unsupervised algorithms in terms of accuracy. Regarding computational efficiency, the proposed algorithm runs much faster than other deep learning methods, i.e., the deep autoencoder (DA) and the stacked autoencoder (SAE), and only slightly slower than the remaining methods. (C) 2015 Elsevier B.V. All rights reserved.
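
The abstract does not spell out the training procedure, but the description suggests that each unsupervised RHN-SLFN layer behaves like an ELM-style autoencoder: the hidden weights are drawn randomly and frozen, and only the output weights are learned, in closed form, by regularized least squares, which is why no iterative non-convex optimization is needed. The sketch below illustrates that idea under those assumptions; the function names (`train_rhn_slfn_layer`, `train_stack`) and the choice of `beta.T` as the per-layer encoder are illustrative conventions, not details taken from the paper.

```python
import numpy as np

def train_rhn_slfn_layer(X, n_hidden, reg=1e-3, rng=None):
    """Fit one unsupervised RHN-SLFN layer that reconstructs its own input X."""
    rng = np.random.default_rng(rng)
    # Randomly fixed hidden neurons: weights and biases are drawn once and frozen.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations
    # The only learned parameters: output weights via regularized least squares,
    # solved in closed form rather than by iterative non-convex optimization.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return beta  # maps hidden activations back to the input space

def train_stack(X, hidden_sizes, reg=1e-3, seed=0):
    """Greedy layer-wise stacking: each layer encodes the previous layer's
    representation, using beta.T as the encoder (an ELM-autoencoder
    convention, assumed here)."""
    betas, rep = [], X
    for i, h in enumerate(hidden_sizes):
        beta = train_rhn_slfn_layer(rep, h, reg=reg, rng=seed + i)
        betas.append(beta)
        rep = np.tanh(rep @ beta.T)  # higher-level representation for next layer
    return betas, rep

# Example: learn a 3-level representation of random data.
X = np.random.default_rng(0).standard_normal((500, 64))
betas, features = train_stack(X, hidden_sizes=[128, 64, 32])
print(features.shape)  # (500, 32)
```

Because each layer reduces to a single linear solve, the whole stack trains in one pass over the layers, which would be consistent with the paper's claim of running much faster than the deep and stacked autoencoders trained by backpropagation.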