International Journal of Parallel Programming

An Empirical Study on Improving the Speed and Generalization of Neural Networks Using a Parallel Circuit Approach



Abstract

One of the common problems of neural networks, especially those with many layers, is their lengthy training time. We attempt to solve this problem at the algorithmic level, proposing a simple parallel design inspired by the parallel circuits found in the human retina. To avoid large matrix calculations, we split the original network vertically into parallel circuits and let the backpropagation algorithm flow through each subnetwork independently. Experimental results show the speed advantage of the proposed approach but also indicate that this advantage is affected by multiple dependencies. The results also suggest that parallel circuits improve the generalization ability of neural networks, presumably due to automatic problem decomposition. By studying network sparsity, we partly justified this theory and proposed a potential method for improving the design.
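The vertical split described in the abstract can be sketched in a few lines of plain Python. This is only an illustration under assumed details, not the authors' implementation: the class names, the tanh hidden units, the summed circuit outputs, and the toy regression target are all choices made here for the example. The key property it demonstrates is that backpropagation stays confined to each circuit, so no gradient computation crosses circuit boundaries.

```python
import math
import random

random.seed(0)

class Circuit:
    """One vertical slice of the network: its own hidden layer and output weights.
    Gradients computed in backward() never leave this circuit."""
    def __init__(self, n_in, n_hidden):
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]

    def forward(self, x):
        # Cache hidden activations for the subsequent backward pass.
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w1]
        return sum(w * h for w, h in zip(self.w2, self.h))

    def backward(self, x, grad_out, lr):
        # Backprop restricted to this circuit's own parameters.
        for j, hj in enumerate(self.h):
            gh = grad_out * self.w2[j] * (1 - hj * hj)  # tanh derivative
            self.w2[j] -= lr * grad_out * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * gh * xi

class ParallelCircuitNet:
    """The full network is the sum of independent parallel circuits."""
    def __init__(self, n_in, n_hidden, n_circuits):
        self.circuits = [Circuit(n_in, n_hidden) for _ in range(n_circuits)]

    def forward(self, x):
        return sum(c.forward(x) for c in self.circuits)

    def train_step(self, x, y, lr=0.05):
        pred = self.forward(x)
        grad = 2 * (pred - y)  # d(MSE)/d(pred), shared by all circuits
        for c in self.circuits:
            c.backward(x, grad, lr)  # each circuit updates independently
        return (pred - y) ** 2

# Toy regression target y = x0 - x1 on a small grid of inputs.
data = [((a, b), a - b) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)]
net = ParallelCircuitNet(n_in=2, n_hidden=3, n_circuits=2)
first = sum(net.train_step(x, y) for x, y in data)
for _ in range(200):
    last = sum(net.train_step(x, y) for x, y in data)
```

Because each circuit's backward pass touches only its own weights, the per-circuit updates could run in separate threads or processes without synchronizing gradients, which is the source of the speed advantage the abstract describes.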
