JMLR: Workshop and Conference Proceedings

Inverting Supervised Representations with Autoregressive Neural Density Models


Abstract

We present a method for feature interpretation that makes use of recent advances in autoregressive density estimation models to invert model representations. We train generative inversion models to express a distribution over input features conditioned on intermediate model representations. Insights into the invariances learned by supervised models can be gained by viewing samples from these inversion models. In addition, we can use these inversion models to estimate the mutual information between a model’s inputs and its intermediate representations, thus quantifying the amount of information preserved by the network at different stages. Using this method we examine the types of information preserved at different layers of convolutional neural networks, and explore the invariances induced by different architectural choices. Finally we show that the mutual information between inputs and network layers initially increases and then decreases over the course of training, supporting recent work by Shwartz-Ziv and Tishby (2017) on the information bottleneck theory of deep learning.
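The mutual-information estimate described in the abstract rests on a standard identity: the information a representation h preserves about an input x is I(x; h) = H(x) + H(h) − H(x, h). The paper estimates these quantities with learned autoregressive density models over images; the toy sketch below instead uses a plug-in (empirical-count) estimator on a discrete example, purely to illustrate what "bits of input information preserved at a layer" means. The function and variable names here are illustrative, not from the paper.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X; H) in bits from (x, h) samples,
    using I(X; H) = H(X) + H(H) - H(X, H)."""
    n = len(pairs)
    def entropy(counts):
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    h_x = entropy(Counter(x for x, _ in pairs))
    h_h = entropy(Counter(h for _, h in pairs))
    h_xh = entropy(Counter(pairs))
    return h_x + h_h - h_xh

# Toy "layer": h = x // 2 discards the low bit of a 2-bit input,
# so exactly 1 bit of information about x is preserved.
xs = [0, 1, 2, 3] * 250               # uniform 2-bit inputs
pairs = [(x, x // 2) for x in xs]
print(mutual_information(pairs))      # 1.0
```

For continuous inputs such as images this counting estimator is infeasible, which is why the paper trains conditional autoregressive density models p(x | h) and uses their log-likelihoods to bound the conditional entropy term instead.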
