IEEE Transactions on Neural Networks

Efficient training algorithms for a class of shunting inhibitory convolutional neural networks



Abstract

This article presents efficient training algorithms, based on first-order, second-order, and conjugate gradient optimization methods, for a class of convolutional neural networks (CoNNs) known as shunting inhibitory convolutional neural networks. Furthermore, a new hybrid method is proposed, derived from the principles of Quickprop, Rprop, SuperSAB, and least squares (LS). Experimental results show that the new hybrid method performs as well as the Levenberg-Marquardt (LM) algorithm, but at a much lower computational cost and with less memory storage. For the sake of comparison, the visual pattern recognition task of face/nonface discrimination is chosen as the classification problem for evaluating the performance of the training algorithms. Sixteen training algorithms are implemented for three variants of the proposed CoNN architecture: binary-connected, Toeplitz-connected, and fully connected architectures. All implemented algorithms train the three network architectures successfully, but their convergence speeds vary markedly. In particular, the combination of LS with the new hybrid method and of LS with the LM method achieves the best convergence rates in terms of the number of training epochs. In addition, the classification accuracies of all three architectures are assessed using tenfold cross-validation. The results show that the binary- and Toeplitz-connected architectures slightly outperform the fully connected architecture: the lowest error rates across all training algorithms are 1.95% for the Toeplitz-connected network, 2.10% for the binary-connected network, and 2.20% for the fully connected network. In general, the modified Broyden-Fletcher-Goldfarb-Shanno (BFGS) methods, the three variants of the LM algorithm, and the new hybrid/LS method perform consistently well, achieving error rates of less than 3% averaged across all three architectures.
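The abstract names the network class but does not reproduce its neuron model. As an illustration only (the exact formulation used in the paper is an assumption here), a commonly cited form of the shunting inhibitory neuron divides an excitatory drive by a shunting (divisive) inhibition term:

```python
import numpy as np

def shunting_inhibitory_response(I, w, c, b, d, a, f=np.tanh, g=np.tanh):
    """Response of a single shunting inhibitory neuron (illustrative form).

    The excitatory drive g(w.I + b) is divided by the shunting term
    a + f(c.I + d), which models divisive inhibition; `a` is a passive
    decay constant chosen so the denominator stays positive.
    """
    excitation = g(np.dot(w, I) + b)
    inhibition = a + f(np.dot(c, I) + d)
    return excitation / inhibition

# With zero weights and biases and a = 1, the response is tanh(0)/(1 + tanh(0)) = 0.
y = shunting_inhibitory_response(np.zeros(3), np.zeros(3), np.zeros(3), 0.0, 0.0, 1.0)
```

In a shunting inhibitory CoNN, feature-map units of this kind replace the usual perceptron-style convolutional units, which is what makes second-order and hybrid training methods attractive: the division makes the error surface harder for plain gradient descent.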
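The hybrid method is described as drawing on Quickprop, Rprop, and SuperSAB, all of which adapt per-weight step sizes from the history of gradient signs. As a simplified sketch of that shared idea (not the paper's actual hybrid rule), a basic Rprop-style update grows a weight's step while successive gradients agree in sign and shrinks it when they disagree:

```python
import numpy as np

def rprop_step(grad, prev_grad, step, w,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One simplified Rprop-style update (illustrative, per-weight).

    Each weight's step size is multiplied by eta_plus when the gradient
    keeps its sign, and by eta_minus when the sign flips; the weight then
    moves opposite the current gradient sign by the adapted step.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    w = w - np.sign(grad) * step
    return w, step
```

The paper's hybrid additionally folds in a least-squares (LS) component, which the abstract reports is what yields the best convergence in training epochs; that part is not sketched here.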
