24th ACM International Conference on Supercomputing, 2010

ParaLearn: A Massively Parallel, Scalable System for Learning Interaction Networks on FPGAs



Abstract

ParaLearn is a scalable, parallel FPGA-based system for learning interaction networks using Bayesian statistics. ParaLearn includes problem-specific parallel, scalable algorithms, system software, and a hardware architecture to address this complex problem.

Learning interaction networks from data uncovers causal relationships and allows scientists to predict and explain a system's behavior. Interaction networks have applications in many fields, though we discuss them particularly in the field of personalized medicine, where state-of-the-art high-throughput experiments generate extensive data on gene expression, DNA sequencing, and protein abundance. In this paper we demonstrate how ParaLearn models signaling networks in human T-cells.

We show a greater than 2,000-fold speedup on a Field Programmable Gate Array (FPGA) compared to a baseline conventional implementation on a General Purpose Processor (GPP), a 2.38-fold speedup compared to a heavily optimized parallel GPP implementation, and between 2.74- and 6.15-fold power savings over the optimized GPP. Using current-generation FPGA technology and caching optimizations, we further project speedups of up to 8.15-fold relative to the optimized GPP. Compared to software approaches, ParaLearn is faster, more power efficient, and can support novel learning algorithms that substantially improve the precision and robustness of the results.
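The abstract does not spell out ParaLearn's learning algorithm, but the inner loop of Bayesian structure learning is typically the repeated scoring of candidate parent sets for each node against the observed data, which is the kind of data-parallel kernel that maps well to an FPGA. The sketch below is a minimal illustration in Python of such a per-family score; the BDeu-style prior, the function name, and the toy data are assumptions for illustration, not ParaLearn's actual implementation.

```python
# Minimal sketch of a Bayesian-network family score (BDeu-style), the kind of
# kernel a structure learner evaluates millions of times. Assumptions: discrete
# data, a BDeu prior; ParaLearn's real scoring function is not given here.
from collections import Counter
from itertools import product
from math import lgamma

def bdeu_family_score(data, child, parents, arities, ess=1.0):
    """Log BDeu score of `child` given `parents` over rows of discrete data.

    data     -- list of tuples, one value per variable, values in 0..arity-1
    child    -- column index of the child variable
    parents  -- tuple of column indices of the candidate parent set
    arities  -- number of states of each variable
    ess      -- equivalent sample size (Dirichlet prior strength)
    """
    r = arities[child]                              # number of child states
    q = 1
    for p in parents:                               # parent configurations
        q *= arities[p]
    alpha_j, alpha_jk = ess / q, ess / (q * r)      # Dirichlet hyperparameters

    counts = Counter()                              # N_ijk counts
    for row in data:
        cfg = tuple(row[p] for p in parents)
        counts[(cfg, row[child])] += 1

    score = 0.0
    for cfg in product(*(range(arities[p]) for p in parents)):
        n_ij = sum(counts[(cfg, k)] for k in range(r))
        score += lgamma(alpha_j) - lgamma(alpha_j + n_ij)
        for k in range(r):
            score += lgamma(alpha_jk + counts[(cfg, k)]) - lgamma(alpha_jk)
    return score

# Toy usage: three binary variables, score X2 with {X0, X1} as candidate parents.
toy = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1), (0, 0, 0), (1, 1, 1)]
print(bdeu_family_score(toy, child=2, parents=(0, 1), arities=[2, 2, 2]))
```

Because each candidate parent set is scored independently from shared count statistics, many such evaluations can run in parallel, which is the property an FPGA implementation like ParaLearn exploits.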