EPJ Web of Conferences

Large-Scale Distributed Training Applied to Generative Adversarial Networks for Calorimeter Simulation

Abstract

In recent years, several studies have demonstrated the benefit of using deep learning to solve typical tasks related to high-energy-physics data taking and analysis. In particular, generative adversarial networks are a good candidate to supplement the simulation of the detector response in a collider environment. Training neural network models has become tractable with the improvement of optimization methods and the advent of GP-GPUs, which are well suited to the highly parallelizable task of training neural nets. Despite these advances, training large models on large data sets can take days to weeks, and finding the best model architecture and settings can require many expensive trials. To get the best out of this new technology, it is important to scale up the available network-training resources and, consequently, to provide tools for optimal large-scale distributed training. In this context, we describe the development of a new training workflow that scales on multi-node/multi-GPU architectures, with an eye to deployment on high-performance computing machines. We describe the integration of hyperparameter optimization with a distributed training framework based on the Message Passing Interface (MPI), for models defined in Keras [12] or PyTorch [13]. We present results on the speedup obtained when training generative adversarial networks on a data set composed of the energy depositions of electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter.
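The core of the synchronous data-parallel scheme the abstract alludes to is that each worker computes gradients on its own shard of the data, and an MPI Allreduce averages those gradients so every worker applies the same update. The following is a minimal illustrative sketch of that averaging scheme, not the authors' framework: the "workers" are simulated sequentially in NumPy, with `allreduce_mean` standing in for an `MPI_Allreduce(SUM)` followed by division by the number of workers, and a simple least-squares model standing in for the GAN.

```python
# Sketch of synchronous data-parallel SGD with gradient averaging.
# Hypothetical example; workers are simulated in-process rather than via MPI.
import numpy as np

def worker_gradient(w, X, y):
    # Gradient of the mean squared error ||Xw - y||^2 / n on one worker's shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def allreduce_mean(grads):
    # Stand-in for MPI_Allreduce(SUM) divided by the worker count:
    # every worker ends up with the mean gradient across all shards.
    return np.mean(grads, axis=0)

def distributed_sgd_step(w, shards, lr=0.1):
    # In a real deployment each gradient is computed on a separate node/GPU.
    grads = [worker_gradient(w, X, y) for X, y in shards]
    return w - lr * allreduce_mean(grads)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

# Split the data set into equal shards across 4 simulated workers.
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

w = np.zeros(3)
for _ in range(200):
    w = distributed_sgd_step(w, shards)
```

With equal-sized shards, the mean of the per-worker gradients equals the gradient over the full data set, so the run converges to `w_true` exactly as single-node SGD would; the distributed version simply divides the per-step compute across workers.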
