Improving the Performance of Neural Networks through Parallel Processing in the Cell Broadband Engine.


Abstract

This thesis explores parallelization approaches for improving the performance of artificial neural networks (ANNs). Its main goal is to define routes for the parallel computation of this problem on the multi-core Cell Broadband Engine. In particular, a new design for parallel tracing of the gradient descent algorithm demonstrated that viable solutions can be found efficiently for the approximation of 2D non-linear functions and the prediction of 1D time series by neural networks. One objective was to identify the parameters of the gradient descent algorithm that can be used to parallelize the 2D function approximation and 1D time series prediction tasks, measured by the speed and accuracy of the delivered solutions, and to obtain fast convergence to the optimal solutions. Specifically, for 2D function approximation, entrapment in local minima was addressed via parallel tracing of converging trajectories while verifying the optimality of the solutions. For 1D time series prediction, the original task involves multiple-input-multiple-output multi-dimensional neural networks and is therefore challenging for the gradient descent algorithm, posing problems of speed and convergence. In this case, the goal was to verify the efficiency of splitting the multiple-step forecasting task into several sub-tasks with different forecasting horizons, in order to reach fast and accurate forecasting solutions. The sub-tasks extracted from the complex task, each with its own forecasting horizon, require only a simpler type of multiple-input-single-output neural network. The objective was to demonstrate the improved efficiency of this approach in a parallel computing environment, reaching fast and accurate solutions with the gradient descent algorithm.
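The thesis runs such trajectory traces on the SPE cores of the Cell Broadband Engine. As a hedged, portable sketch of the core idea only (not the thesis code: the toy objective, function names, and thread-based parallelism are all illustrative assumptions), several gradient-descent trajectories can be traced concurrently from different random starting points, with the best converged minimum kept — so a trajectory trapped in a shallow local minimum is outvoted by one that reaches the global minimum:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def objective(x):
    """Toy 1-D non-convex objective with two local minima (illustrative only)."""
    return x**4 - 3 * x**2 + x

def gradient(x):
    """Analytic derivative of the toy objective."""
    return 4 * x**3 - 6 * x + 1

def trace_trajectory(seed, steps=500, lr=0.01):
    """Trace one gradient-descent trajectory from a seeded random start."""
    x = random.Random(seed).uniform(-2.0, 2.0)
    for _ in range(steps):
        x -= lr * gradient(x)
    return objective(x), x

def parallel_trace(n_trajectories=8):
    """Trace several trajectories concurrently; keep the lowest minimum found.

    On the Cell BE each trajectory would run on its own SPE; a thread pool
    stands in for that here purely for illustration.
    """
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(trace_trajectory, range(n_trajectories)))
    return min(results)  # (objective value, x) with the lowest value

best_value, best_x = parallel_trace()
print(best_value, best_x)  # global minimum near x ≈ -1.30
```

Trajectories starting to the right of the local maximum converge to the shallow minimum near x ≈ 1.13; parallel tracing keeps the deeper minimum near x ≈ -1.30, which is the role the converging-trajectory design plays in the 2D function approximation task.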
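The splitting of the multiple-step forecasting task into per-horizon sub-tasks amounts to building, for each forecasting horizon, its own (input window → single target) training set, so that each sub-task needs only a multiple-input-single-output network. A minimal sketch of that dataset construction, assuming a plain Python list as the series (function name and toy data are illustrative, not from the thesis):

```python
def make_horizon_dataset(series, window, horizon):
    """Turn one time series into (input window -> single target) pairs
    for a fixed forecasting horizon.

    Each horizon gets its own dataset, so each sub-task can be solved
    by a simpler multiple-input-single-output network.
    """
    inputs, targets = [], []
    for t in range(len(series) - window - horizon + 1):
        inputs.append(series[t:t + window])          # past `window` values
        targets.append(series[t + window + horizon - 1])  # value `horizon` steps ahead
    return inputs, targets

series = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
X1, y1 = make_horizon_dataset(series, window=3, horizon=1)
X2, y2 = make_horizon_dataset(series, window=3, horizon=2)
print(X1[0], y1[0])  # [0, 1, 2] 3  (one step ahead)
print(X2[0], y2[0])  # [0, 1, 2] 4  (two steps ahead)
```

Since the per-horizon datasets are independent, the resulting sub-tasks are natural units for the parallel training the thesis evaluates.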

Bibliographic record

  • Author: Boiko, Yuri.
  • Affiliation: Carleton University (Canada).
  • Degree grantor: Carleton University (Canada).
  • Subject: Engineering, Electronics and Electrical.
  • Degree: M.A.Sc.
  • Year: 2010
  • Pages: 134 p.
  • Format: PDF
  • Language: English
