International Joint Conference on Neural Networks

Computational Analysis of Learned Representations in Deep Neural Network Classifiers



Abstract

When a neural network is trained for a specific task, activations of the hidden units encode internal representations of the inputs. Models formulated in a layer-wise fashion are believed to structure such representations in a hierarchical fashion, increasing in complexity and abstractness towards the output layer, in an analogy to both biological neural networks and artificially constructed computational models. This paper examines how the structure of classification tasks manifests itself in these internal representations, using a variety of ad hoc metrics. The results, based on feedforward neural networks trained on moderately complex datasets MNIST and SVHN, confirm our hypothesis that the hidden neurons become more correlated with class information towards the output layer, providing some evidence for an increasing bottom-up organization in representations. While various activation functions lead to noticeably different internal representations as measured by each of the methods, the differences in overall classification accuracy remain minute. This confirms the intuition that there exist qualitatively different solutions to the complex classification problem imposed by nonlinearities in the hidden layers.
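The paper's specific "ad hoc metrics" are not spelled out in the abstract. As one plausible instance of such a metric, the sketch below measures, for each layer, how strongly hidden-unit activations correlate with class membership using the correlation ratio (eta-squared: the fraction of a unit's activation variance explained by class labels). The synthetic "layers" and the choice of eta-squared are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

def class_correlation(activations, labels):
    """Mean correlation ratio (eta^2) over hidden units: the fraction of
    each unit's activation variance explained by class membership."""
    grand_mean = activations.mean(axis=0)
    total_ss = ((activations - grand_mean) ** 2).sum(axis=0)
    between_ss = np.zeros_like(grand_mean)
    for c in np.unique(labels):
        mask = labels == c
        between_ss += mask.sum() * (activations[mask].mean(axis=0) - grand_mean) ** 2
    eta_sq = between_ss / np.maximum(total_ss, 1e-12)
    return float(eta_sq.mean())

# Synthetic demo: simulate four "layers" whose activations carry
# progressively more class signal, mimicking representations that grow
# more class-correlated towards the output layer.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
scores = []
for signal in [0.1, 0.5, 1.0, 2.0]:  # class-signal strength per layer
    acts = rng.normal(size=(1000, 64)) + signal * labels[:, None]
    scores.append(class_correlation(acts, labels))
# scores increase monotonically with the injected class signal
```

On real data, `activations` would instead be the hidden activations of a trained feedforward network recorded at each layer for a batch of MNIST or SVHN inputs; plotting the resulting score against layer index is one way to visualize the bottom-up increase in class organization the abstract reports.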
