International Journal of Quantum Chemistry

Constructing high-dimensional neural network potentials: A tutorial review



Abstract

Much progress has been made in recent years in the development of atomistic potentials using machine learning (ML) techniques. In contrast to most conventional potentials, which are based on physical approximations and simplifications to derive an analytic functional relation between the atomic configuration and the potential energy, ML potentials rely on simple but very flexible mathematical terms without a direct physical meaning. Instead, in the case of ML potentials the topology of the potential-energy surface is learned by adjusting a number of parameters with the aim of reproducing a set of reference electronic structure data as accurately as possible. Due to this bias-free construction, they are applicable to a wide range of systems without changes in their functional form, and a very high accuracy close to the underlying first-principles data can be obtained. Neural network potentials (NNPs), which were first proposed about two decades ago, are an important class of ML potentials. Although the first NNPs were restricted to small molecules with only a few degrees of freedom, they are now applicable to high-dimensional systems containing thousands of atoms, which makes it possible to address a variety of problems in chemistry, physics, and materials science. In this tutorial review, the basic ideas of NNPs are presented with a special focus on developing NNPs for high-dimensional condensed systems. A recipe for the construction of these potentials is given, and the remaining limitations of the method are discussed. (c) 2015 Wiley Periodicals, Inc.
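The high-dimensional NNP scheme the review describes (the Behler-Parrinello approach) expresses the total energy as a sum of atomic contributions, each predicted by a small feed-forward network from descriptors of the atom's local chemical environment. A minimal NumPy sketch of this ansatz follows; the radial descriptor form, the network architecture, and all parameter values are illustrative assumptions, not the reference implementation from the review:

```python
import numpy as np

def radial_symmetry_functions(positions, i, etas, r_cut=6.0):
    """Radial descriptors for atom i (illustrative, Behler-Parrinello style):
    G_i(eta) = sum_{j != i} exp(-eta * r_ij^2) * f_cut(r_ij)."""
    r = np.linalg.norm(positions - positions[i], axis=1)
    r = r[np.arange(len(positions)) != i]  # exclude self-interaction
    # Smooth cosine cutoff: goes to zero at r_cut with zero slope.
    fc = np.where(r < r_cut, 0.5 * (np.cos(np.pi * r / r_cut) + 1.0), 0.0)
    return np.array([np.sum(np.exp(-eta * r**2) * fc) for eta in etas])

class AtomicNN:
    """One-hidden-layer feed-forward net mapping descriptors to an atomic energy."""
    def __init__(self, n_in, n_hidden, rng):
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, n_hidden)
        self.b2 = 0.0

    def energy(self, g):
        h = np.tanh(self.W1 @ g + self.b1)
        return self.W2 @ h + self.b2

def total_energy(positions, net, etas):
    """High-dimensional NNP ansatz: E = sum_i E_i(G_i), with the same
    atomic network applied to every atom of a given element."""
    return sum(net.energy(radial_symmetry_functions(positions, i, etas))
               for i in range(len(positions)))
```

Because every atom (of a given element) is evaluated by the same network, and the descriptors depend only on interatomic distances, the resulting energy is invariant under permutation, translation, and rotation of the atoms, and the model extends naturally to systems of thousands of atoms. In a real potential the weights would be fitted to reference electronic-structure energies rather than drawn at random.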
