Neurocomputing

Neural self-compressor: Collective interpretation by compressing multi-layered neural networks into non-layered networks



Abstract

The present paper proposes a new method called "neural self-compressors" to compress multi-layered neural networks into the simplest possible ones (i.e., without hidden layers) to aid in the interpretation of relations between inputs and outputs. Though neural networks have shown great success in improving generalization, the interpretation of internal representations becomes a serious problem as the number of hidden layers and their corresponding connection weights grows. To overcome this interpretation problem, we introduce a method that compresses multi-layered neural networks into ones without hidden layers. In addition, this method simplifies entangled weights as much as possible by maximizing mutual information between inputs and outputs. In this way, final connection weights can be interpreted as easily as the coefficients of a logistic regression analysis. The method was applied to four data sets: a symmetric data set, an ovarian cancer data set, a restaurant data set, and a credit card holders' default data set. With the first, the symmetric data set, we explain intuitively how the present method produces interpretable outputs. In all the other cases, we succeeded in compressing multi-layered neural networks into their simplest forms with the help of mutual information maximization. In addition, by de-correlating outputs, we were able to transform connection weights from values close to the regression coefficients into ones with more explicit features. (C) 2018 Elsevier B.V. All rights reserved.
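The abstract describes collapsing a deep network into a single input-to-output weight matrix whose entries can be read like logistic regression coefficients. As a minimal sketch of the compression idea only, the example below multiplies the successive weight matrices of a toy trained network; this linear collapse is an illustrative assumption (the abstract does not specify the paper's actual self-compression or mutual-information-maximization procedure), and the names and layer sizes (`W1`, `W2`, `W3`, `W_compressed`, 4-16-8-3) are hypothetical.

```python
import numpy as np

# Hypothetical toy "trained" weights for a 4-16-8-3 network
# (input -> two hidden layers -> output).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 8))
W3 = rng.normal(size=(8, 3))

# Compressed, non-layered network: one 4x3 matrix relating inputs
# directly to outputs. This product is exact only if the hidden
# units were linear; it is a sketch of the idea, not the paper's method.
W_compressed = W1 @ W2 @ W3

print(W_compressed.shape)  # (4, 3): one weight per input-output pair
```

Each entry of `W_compressed` relates one input to one output directly, which is what makes the non-layered form as readable as regression coefficients. With nonlinear hidden units, a plain matrix product is only an approximation; disentangling the weights is what the paper's mutual information maximization addresses, and that step is omitted from this sketch.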


