Neurocomputing

Partially black-boxed collective interpretation and its application to SOM-based convolutional neural networks



Abstract

This paper aims to extend collective interpretation to networks with complicated components. Collective interpretation is used to generate an internally interpretable model independently of specific inputs and learning conditions. The internally interpretable model is obtained by network compression, in which multiple layers are sequentially compressed while taking into account all possible routes from inputs to outputs. This network compression is easily applied to fully connected networks, but it cannot be applied to networks containing complicated components. Thus, to make compression possible, we partially and minimally black-box these components so that they can be replaced by ordinary ones. To demonstrate the effectiveness of this technique, we use a new model based on the self-organizing map (SOM). We then introduce convolutional neural networks (CNNs) to deal with SOM knowledge, which is usually represented on two-dimensional lattices. Because our network compression cannot handle the convolutional components, we temporarily black-box them. Fixing the other connection weights, we re-train the partially black-boxed network to obtain the simplest prototype model for interpretation. The method was applied to two well-known data sets, and we demonstrated that it could compress the networks into the simplest and most interpretable ones. In addition, very stable compressed weights could be obtained, making interpretation easy. The results suggest that the main mechanism of multi-layered neural networks is based on linear relations between individual inputs and targets, to which peripheral non-linear relations are added. (c) 2021 Elsevier B.V. All rights reserved.
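One way to picture the compression step described in the abstract is as collapsing the layer-by-layer weight matrices into a single input-to-output matrix by accumulating every route through the network. The sketch below is a minimal numpy illustration of that reading; the function name, the use of plain matrix products, and the neglect of activation functions are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def compress_layers(weights):
    """Sequentially compress a stack of layer weight matrices into one
    input-to-output matrix, accumulating all routes from inputs to outputs.

    weights[k] has shape (units_k, units_{k+1}); the result has shape
    (n_inputs, n_outputs), where entry (i, j) sums the products of weights
    along every route from input i to output j.
    """
    compressed = weights[0]
    for w in weights[1:]:
        compressed = compressed @ w  # fold the next layer's routes into the result
    return compressed

# Hypothetical 4-layer network: 4 inputs -> 5 hidden -> 3 hidden -> 2 outputs
rng = np.random.default_rng(0)
ws = [rng.normal(size=s) for s in [(4, 5), (5, 3), (3, 2)]]
print(compress_layers(ws).shape)  # (4, 2): one compressed weight per input-output pair
```

Under this reading, the compressed matrix gives a single interpretable weight for each input-output pair, which is why the procedure applies directly to fully connected networks but not to convolutional or SOM-based components.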
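The partial black-boxing step can likewise be sketched as swapping the convolutional (SOM-handling) block for an ordinary fully connected layer and re-training only that replacement while the remaining weights stay fixed. The PyTorch sketch below is an assumption-laden illustration of that idea; the layer sizes, module names, and training details are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class PartiallyBlackBoxedNet(nn.Module):
    """Hypothetical stand-in network: the convolutional block is black-boxed,
    i.e. replaced by an ordinary fully connected layer of matching size."""
    def __init__(self, n_in=100, n_hidden=50, n_out=2):
        super().__init__()
        self.black_box = nn.Linear(n_in, n_hidden)  # replaces the CNN/SOM component
        self.head = nn.Linear(n_hidden, n_out)      # kept from the trained network

    def forward(self, x):
        return self.head(torch.relu(self.black_box(x)))

net = PartiallyBlackBoxedNet()

# Fix the other connection weights; re-train only the black-boxed replacement.
for p in net.head.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD(
    [p for p in net.parameters() if p.requires_grad], lr=0.1)

# One illustrative re-training step on random data.
x = torch.randn(8, 100)
y = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(net(x), y)
loss.backward()
optimizer.step()
```

Once the network consists only of ordinary fully connected layers, the route-based compression sketched above can be applied to obtain the simplest prototype model for interpretation.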
