Parallel Computing

Optimizing neural networks on SIMD parallel computers


Abstract

Hopfield neural networks are often used to solve difficult combinatorial optimization problems. Multiple-restart versions find better solutions but are slow on serial computers. Here, we study two parallel SIMD implementations of multiple-restart Hopfield networks for solving the maximum clique problem. The first is a fine-grained implementation on the Kestrel Parallel Processor, a linear SIMD array designed and built at the University of California, Santa Cruz. The second is an implementation on the MasPar MP-2 according to the "SIMD Phase Programming Model", a new method for solving asynchronous, irregular problems on SIMD machines. We find that the neural networks map well to the parallel architectures and afford substantial speedups with respect to the serial program, without sacrificing solution quality.
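The abstract does not give the network formulation. As a rough illustration only, the sketch below shows one common way a multiple-restart, discrete Hopfield-style search for maximum clique can be written: one neuron per vertex, an energy function that rewards selected vertices and penalizes selecting non-adjacent pairs, asynchronous updates to a local minimum, and repeated random restarts keeping the best clique found. The energy weights, update schedule, and all names here are assumptions for illustration, not the paper's actual model.

```python
import random

def hopfield_max_clique(adj, n, restarts=100, penalty=2.0, seed=0):
    """Multiple-restart, discrete Hopfield-style heuristic for maximum clique.

    adj: set of frozenset({i, j}) edges; n: number of vertices.
    Assumed energy (not from the paper):
        E(x) = -sum_i x_i + penalty * sum over non-edges {i, j} of x_i * x_j
    With penalty > 1, every stable state of the asynchronous dynamics
    corresponds to a maximal clique.
    """
    rng = random.Random(seed)
    best = set()
    for _ in range(restarts):
        x = [rng.random() < 0.5 for _ in range(n)]   # random restart state
        changed = True
        while changed:                               # asynchronous descent to a local minimum
            changed = False
            for i in rng.sample(range(n), n):        # visit neurons in random order
                # energy change of setting x[i] = 1: -1 + penalty * (# selected non-neighbours)
                conflicts = sum(1 for j in range(n)
                                if j != i and x[j] and frozenset((i, j)) not in adj)
                want_on = (-1.0 + penalty * conflicts) < 0
                if x[i] != want_on:
                    x[i] = want_on
                    changed = True
        clique = {i for i in range(n) if x[i]}
        # with penalty > 1 the stable state should already be a clique; verify anyway
        if all(frozenset((i, j)) in adj for i in clique for j in clique if i < j):
            if len(clique) > len(best):
                best = clique
    return best
```

In this kind of scheme the independent restarts (and the per-neuron updates within each restart) are natural units to distribute across SIMD processing elements, which is presumably the kind of mapping the two implementations in the abstract exploit.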
