Applied Soft Computing

Stacked autoencoder based deep random vector functional link neural network for classification



Abstract

Extreme learning machine (ELM), which can be viewed as a variant of Random Vector Functional Link (RVFL) network without the input-output direct connections, has been extensively used to create multi-layer (deep) neural networks. Such networks employ randomization based autoencoders (AE) for unsupervised feature extraction followed by an ELM classifier for final decision making. Each randomization based AE acts as an independent feature extractor and a deep network is obtained by stacking several such AEs. Inspired by the better performance of RVFL over ELM, in this paper, we propose several deep RVFL variants by utilizing the framework of stacked autoencoders. Specifically, we introduce direct connections (feature reuse) from preceding layers to the fore layers of the network as in the original RVFL network. Such connections help to regularize the randomization and also reduce the model complexity. Furthermore, we also introduce denoising criterion, recovering clean inputs from their corrupted versions, in the autoencoders to achieve better higher level representations than the ordinary autoencoders. Extensive experiments on several classification datasets show that our proposed deep networks achieve overall better and faster generalization than the other relevant state-of-the-art deep neural networks. (C) 2019 Elsevier B.V. All rights reserved.
