International Joint Conference on Neural Networks

Hierarchical Extreme Learning Machine for unsupervised representation learning

Abstract

Learning representations from massive unlabeled data is a central topic for high-level tasks in many applications. Recent large improvements on benchmark data sets, achieved by increasingly complex unsupervised learning methods and deep learning models with many parameters, usually require tedious tricks and considerable expertise to tune. However, the filters learned by these complex architectures look visually quite similar to standard hand-crafted features, and training such deep models takes a long time to fine-tune the weights. In this paper, the Extreme Learning Machine Autoencoder (ELM-AE) is employed as the learning unit to learn local receptive fields at each layer, and the lower-layer responses are transferred to the last layer (trans-layer) to form a more complete representation that retains more information. In addition, techniques that have proven beneficial in deep architectures, such as local contrast normalization and whitening, are added to the proposed hierarchical Extreme Learning Machine networks to further boost performance. The trans-layer representations are then encoded by binary hashing followed by block histograms to obtain translation- and rotation-invariant representations, which are used for high-level tasks such as recognition and detection. Compared with traditional deep learning methods, the proposed trans-layer representation method with ELM-AE based learning of local receptive filters learns much faster, and it is validated on several typical tasks, including digit recognition on MNIST and the MNIST variations and object recognition on Caltech 101. State-of-the-art performance is achieved both on the Caltech 101 task with 15 samples per class and on 4 of the 6 MNIST variation data sets, and highly competitive results are obtained on MNIST and the other tasks.
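The learning unit described above admits a compact closed-form implementation. The following is a minimal NumPy sketch of an ELM-AE layer and the trans-layer stacking, written under stated assumptions: the `elm_ae` and `hierarchical_features` names, the tanh activation, and the ridge parameter `C` are illustrative choices rather than the authors' exact configuration, and the whitening, local contrast normalization, and histogram stages from the abstract are omitted here.

```python
import numpy as np

def elm_ae(X, n_hidden, C=1e3, rng=None):
    """One ELM-Autoencoder: project through random orthogonal weights,
    then solve a ridge regression for the output weights beta that
    reconstruct the input; beta serves as the learned filter bank."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Random input weights, orthogonalized as in ELM-AE
    # (assumes n_hidden <= n_features so reduced QR keeps the shape).
    W, _ = np.linalg.qr(rng.standard_normal((n_features, n_hidden)))
    b = rng.standard_normal(n_hidden)
    b /= np.linalg.norm(b)
    H = np.tanh(X @ W + b)                     # random-feature hidden layer
    # Closed-form ridge solution: beta = (H'H + I/C)^{-1} H'X.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return beta                                # shape (n_hidden, n_features)

def hierarchical_features(X, layer_sizes, C=1e3):
    """Stack ELM-AE layers and concatenate every layer's response,
    mirroring the trans-layer idea of transferring lower-layer
    responses to the last layer."""
    responses, current = [], X
    for n_hidden in layer_sizes:
        beta = elm_ae(current, n_hidden, C)
        current = np.tanh(current @ beta.T)    # forward pass with learned filters
        responses.append(current)
    return np.hstack(responses)                # lower layers kept in the output
```

Because each layer is solved in closed form rather than fine-tuned by backpropagation, training reduces to one QR factorization and one linear solve per layer, which is where the claimed speed advantage over gradient-trained deep models comes from.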
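The invariance stage can be sketched in the same spirit. Below is a hedged illustration of binary hashing followed by block histograms over a stack of response maps; the zero threshold, the bit-packing of one code per pixel, and the 4x4 block grid are assumptions for illustration, not the paper's reported settings.

```python
import numpy as np

def binary_hash_histograms(maps, n_blocks=4):
    """Binarize a stack of response maps, pack the bits into one integer
    code per pixel, then histogram the codes over a grid of spatial
    blocks to gain tolerance to local translations."""
    h, w, n_bits = maps.shape                  # n_bits response maps per pixel
    bits = (maps > 0).astype(np.int64)         # binary hashing at threshold 0
    codes = bits @ (2 ** np.arange(n_bits))    # (h, w) codes in [0, 2**n_bits)
    hists = []
    for rows in np.array_split(codes, n_blocks, axis=0):
        for block in np.array_split(rows, n_blocks, axis=1):
            hist, _ = np.histogram(block, bins=2 ** n_bits,
                                   range=(0, 2 ** n_bits))
            hists.append(hist)
    return np.concatenate(hists)               # n_blocks**2 * 2**n_bits values
```

Pooling counts rather than positions is what makes the final descriptor insensitive to small shifts of the input, while histogramming over blocks rather than the whole image preserves coarse spatial layout.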
