Machine translation: Optimizing Makespan and Resource Utilization for Multi-DNN Training in GPU Clusters
School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China;
Artificial Intelligence and Information Systems Research Group, School of Computing, Engineering and Digital Technologies, Teesside University, Middlesbrough, UK;
School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China;
School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China;
State Key Laboratory for Novel Software Technology, Software Institute, Nanjing University, Nanjing, China;
Department of Mathematics and Applications 'R. Caccioppoli' (DMA), University of Naples Federico II (UNINA), Italy;
Deep neural network (DNN) training; Ring-Allreduce; Job scheduling; Resource allocation; Linear scaling rule (LSR); GPU cluster;
Machine translation: Optimizing Makespan and Resource Utilization in Fog Computing Environments via a Task Scheduling Algorithm
Machine translation: Optimizing Resource Utilization During Medical Students' Suturing Skills Training: A Randomized Controlled Comparison of Faculty-Led, Peer-Tutor-Led, and General Teaching Approaches
Machine translation: Limited Feedback and Video Tutorials Optimize Learning and Resource Utilization During Laparoscopic Simulator Training
Machine translation: Leveraging Clustering to Optimize a Resource Demand Estimation Approach
Machine translation: Automatically Translating and Optimizing Applications on GPUs and GPU Clusters
Machine translation: RGCA: A Reliable GPU Cluster Architecture for Large-Scale IoT Computing Based on Effective Performance-Energy Optimization
Machine translation: Dynamic Resource Management for Efficient Utilization of Multitasking GPUs