International Joint Conference on Neural Networks

A New Sensitivity-Based Pruning Technique for Feed-Forward Neural Networks That Improves Generalization



Abstract

Multi-layer neural networks of the back-propagation type (MLP networks) have become a well-established tool in various application areas. Reliable solutions, however, also require sufficient generalization capabilities of the trained networks and an easy interpretation of their function. These characteristics are strongly related to less sensitive networks with an optimized structure. In this paper, we introduce a new pruning technique called SCGSIR, inspired by the fast method of scaled conjugate gradients (SCG) and by sensitivity analysis. Inhibiting network sensitivity during training supports efficient optimization of the network structure. Experiments performed so far yield promising results: the new technique outperforms the reference techniques both in its ability to find networks with an optimum architecture and in the improved generalization of the resulting networks.
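The abstract does not give the details of SCGSIR, but the general idea of sensitivity-based pruning can be sketched. The snippet below is a minimal illustration, not the authors' algorithm: for a single-hidden-layer tanh MLP it scores each hidden unit by a simple relevance proxy (mean absolute activation times the magnitude of the outgoing weight, an assumption made here for illustration) and removes the least relevant units. The function names `sensitivities` and `prune_least_sensitive` are hypothetical.

```python
import numpy as np

def sensitivities(W1, W2, X):
    """Relevance score for each hidden unit of a one-hidden-layer tanh MLP.

    W1: input-to-hidden weights, shape (d, h)
    W2: hidden-to-output weights, shape (h, 1)
    X:  input samples, shape (n, d)
    """
    H = np.tanh(X @ W1)  # hidden activations, shape (n, h)
    # Proxy for sensitivity of the output to each hidden unit:
    # average |activation| scaled by the outgoing weight magnitude.
    return np.mean(np.abs(H), axis=0) * np.abs(W2).ravel()

def prune_least_sensitive(W1, W2, X, k=1):
    """Remove the k hidden units with the smallest sensitivity scores."""
    s = sensitivities(W1, W2, X)
    keep = np.argsort(s)[k:]  # drop the k lowest-scoring units
    return W1[:, keep], W2[keep]
```

In a full pruning loop one would retrain the smaller network after each removal and stop when validation performance degrades; SCGSIR additionally shapes the sensitivities during SCG training rather than only measuring them afterwards.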
