ARL-TR-8580 - Scaling to Multiple Graphics Processing Units (GPUs) in TensorFlow | U.S. Army Research Laboratory


Abstract

Although the accuracy of neural networks now surpasses human performance on some tasks, training a deep neural network is a time-consuming task due to its increasingly high-dimensional parameters. It is not uncommon for the training of a deep neural network to run for a week. Accordingly, the size of neural networks has doubled every 2.4 years, exhibiting exponential growth from 1958 to 2014. The increasing size of neural network architectures will likely lead to higher computational complexity that will need scalable solutions. To mitigate the computational requirement and maximize throughput, this work focuses on multi-graphics-processing-unit scalability.
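The report's own TensorFlow code is not reproduced in this abstract. As an illustration of the general pattern behind multi-GPU scaling — synchronous data parallelism, where each device computes gradients on its shard of a batch and the results are averaged (the role an all-reduce plays across GPUs) — here is a minimal plain-Python sketch with a hypothetical toy 1-D linear model; all names and the learning rate are assumptions for illustration:

```python
def grad(w, x, y):
    """Gradient of the squared error 0.5 * (w*x - y)**2 with respect to w."""
    return (w * x - y) * x

def data_parallel_step(w, batch, n_devices, lr=0.01):
    """One synchronous data-parallel update: shard the batch, compute a
    gradient per shard (each shard would run on its own GPU), then
    average the shard gradients (the all-reduce) and apply the update."""
    shards = [batch[i::n_devices] for i in range(n_devices)]
    shard_grads = [
        sum(grad(w, x, y) for x, y in shard) / len(shard)
        for shard in shards
    ]
    avg_grad = sum(shard_grads) / n_devices  # all-reduce: average across devices
    return w - lr * avg_grad

# Toy data generated from y = 2*x, so training should drive w toward 2.
batch = [(float(x), 2.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, n_devices=4)
```

Because the shards are equal in size, the averaged gradient equals the full-batch gradient, so this synchronous scheme matches single-device training step-for-step; the throughput gain on real hardware comes from the shards being processed concurrently.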

Bibliographic record

  • Author: Park, Song J.
  • Year: 2018
  • Total pages: 16
  • Original format: PDF
  • Source site: U.S. Army Research Laboratory
