International Conference on Neural Information Processing

Deep Dictionary Learning vs Deep Belief Network vs Stacked Autoencoder: An Empirical Analysis



Abstract

A recent work introduced the concept of deep dictionary learning. The first level is a dictionary learning stage in which the inputs are the training data and the outputs are the dictionary and the learned coefficients. In each subsequent level of deep dictionary learning, the coefficients learned at the previous level act as the inputs. This is an unsupervised representation learning technique. In this work we empirically compare and contrast it with similar deep representation learning techniques: the deep belief network and the stacked autoencoder. We examine two aspects: the first is the robustness of the learning tool in the presence of noise, and the second is its robustness with respect to variations in the number of training samples. The experiments have been carried out on several benchmark datasets. We find that deep dictionary learning is the most robust method.
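The abstract describes a greedy, level-by-level scheme: the first level learns a dictionary and coefficients from the raw training data, and each subsequent level learns a new dictionary from the previous level's coefficients. The sketch below illustrates that scheme using scikit-learn's DictionaryLearning as the per-level solver; the layer sizes, sparsity penalty, and toy data are illustrative assumptions, not the paper's settings or the authors' implementation.

```python
# Minimal sketch of greedy layer-wise deep dictionary learning,
# assuming scikit-learn's DictionaryLearning as each level's solver.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def deep_dictionary_learning(X, layer_sizes, alpha=1.0, max_iter=100):
    """Greedily learn a stack of dictionaries.

    X           : (n_samples, n_features) training data
    layer_sizes : number of dictionary atoms at each level (assumed values)
    Returns the list of learned dictionaries and the final-level coefficients.
    """
    dictionaries = []
    Z = X  # level-1 input is the training data itself
    for n_atoms in layer_sizes:
        dl = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                                max_iter=max_iter,
                                transform_algorithm="lasso_lars")
        # fit_transform returns the coefficient matrix; the dictionary
        # itself is stored in dl.components_
        Z = dl.fit_transform(Z)  # coefficients become the next level's input
        dictionaries.append(dl.components_)
    return dictionaries, Z

if __name__ == "__main__":
    rng = np.random.RandomState(0)
    X = rng.randn(200, 64)  # toy data standing in for a benchmark dataset
    dicts, codes = deep_dictionary_learning(X, layer_sizes=[32, 16])
    print([D.shape for D in dicts], codes.shape)  # [(32, 64), (16, 32)] (200, 16)
```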
