NATO Advanced Research Workshop on Concurrent Information Processing and Computing; 5-10 July 2003; Sinaia (RO)

Parallelizing the Training Phase of Back-propagation in a LAN of Workstations


Abstract

This article presents the results of experiments in parallelizing the training phase of a feed-forward artificial neural network. More specifically, we develop and analyze a parallelization strategy for the widely used neural-network learning algorithm known as back-propagation. We describe a strategy for parallelizing the back-propagation algorithm and implemented it on several LANs, which allowed us to evaluate and analyze its performance based on the results of actual runs. We were interested in the qualitative aspects of the analysis, in order to reach a sound understanding of the factors that determine the behavior of this parallel algorithm. We also wanted to identify and deal with some of the specific circumstances that must be considered when a parallelized neural-network learning algorithm is implemented on a set of workstations in a LAN. Part of our purpose was to investigate whether the computational resources of such a set of workstations can be exploited.
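
The abstract does not give the details of the parallelization strategy, so the following is only a minimal sketch of the common training-set-partitioning approach to parallel back-propagation: each worker (standing in for one workstation on the LAN) runs the forward and backward pass on its own slice of the training data, the partial gradients are summed, and a single weight update is applied per epoch. The network shape, learning rate, helper names, and the use of NumPy in a single process are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden, n_out):
    """Random small weights for a two-layer (one hidden layer) network."""
    return {
        "W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
        "W2": rng.normal(0.0, 0.5, (n_hidden, n_out)),
    }

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_grads(net, X, T):
    """Forward and backward pass over one data slice; returns summed gradients."""
    H = sigmoid(X @ net["W1"])                 # hidden activations
    Y = sigmoid(H @ net["W2"])                 # network outputs
    dY = (Y - T) * Y * (1.0 - Y)               # output-layer delta (squared-error loss)
    dH = (dY @ net["W2"].T) * H * (1.0 - H)    # hidden-layer delta
    return {"W1": X.T @ dH, "W2": H.T @ dY}

def parallel_epoch(net, X, T, n_workers, lr=0.5):
    """One synchronous epoch: split the data, accumulate gradients, update once."""
    total = {k: np.zeros_like(v) for k, v in net.items()}
    for Xi, Ti in zip(np.array_split(X, n_workers), np.array_split(T, n_workers)):
        grads = backprop_grads(net, Xi, Ti)    # in the LAN setting, one workstation's job
        for k in total:
            total[k] += grads[k]               # the "combine" step done by the coordinator
    for k in net:
        net[k] -= lr * total[k] / len(X)       # single weight update after synchronization

# Toy usage: 4 simulated workers on a small synthetic classification task.
X = rng.normal(size=(64, 3))
T = (X.sum(axis=1, keepdims=True) > 0).astype(float)
net = init_net(3, 5, 1)
for _ in range(500):
    parallel_epoch(net, X, T, n_workers=4)
pred = sigmoid(sigmoid(X @ net["W1"]) @ net["W2"])
print("mean squared error:", float(np.mean((pred - T) ** 2)))
```

In an actual LAN implementation the inner loop would run on separate machines, and the gradient accumulation would become one message exchange per worker per epoch; that communication cost is precisely what competes with the computational gain the abstract sets out to evaluate.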
