
Energy and Policy Considerations for Modern Deep Learning Research

Abstract

The field of artificial intelligence has experienced a dramatic methodological shift towards large neural networks trained on plentiful data. This shift has been fueled by recent advances in hardware and techniques enabling remarkable levels of computation, resulting in impressive advances in AI across many applications. However, the massive computation required to obtain these exciting results is costly both financially, due to the price of specialized hardware and electricity or cloud compute time, and to the environment, as a result of non-renewable energy used to fuel modern tensor processing hardware. In a paper published this year at ACL, we brought this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training and tuning neural network models for NLP (Strubell, Ganesh, and McCallum 2019). In this extended abstract, we briefly summarize our findings in NLP, incorporating updated estimates and broader information from recent related publications, and provide actionable recommendations to reduce costs and improve equity in the machine learning and artificial intelligence community.
